00:00:00.000 Started by upstream project "autotest-per-patch" build number 132782
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.094 The recommended git tool is: git
00:00:00.094 using credential 00000000-0000-0000-0000-000000000002
00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.116 Fetching changes from the remote Git repository
00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.143 Using shallow fetch with depth 1
00:00:00.143 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.143 > git --version # timeout=10
00:00:00.164 > git --version # 'git version 2.39.2'
00:00:00.164 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.176 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.176 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.730 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.742 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.753 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.753 > git config core.sparsecheckout # timeout=10
00:00:05.762 > git read-tree -mu HEAD # timeout=10
00:00:05.775 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.796 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.797 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.890 [Pipeline] Start of Pipeline
00:00:05.902 [Pipeline] library
00:00:05.904 Loading library shm_lib@master
00:00:05.904 Library shm_lib@master is cached. Copying from home.
00:00:05.918 [Pipeline] node
00:00:05.926 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.928 [Pipeline] {
00:00:05.937 [Pipeline] catchError
00:00:05.938 [Pipeline] {
00:00:05.950 [Pipeline] wrap
00:00:05.957 [Pipeline] {
00:00:05.965 [Pipeline] stage
00:00:05.967 [Pipeline] { (Prologue)
00:00:06.178 [Pipeline] sh
00:00:06.465 + logger -p user.info -t JENKINS-CI
00:00:06.480 [Pipeline] echo
00:00:06.481 Node: WFP6
00:00:06.486 [Pipeline] sh
00:00:06.782 [Pipeline] setCustomBuildProperty
00:00:06.798 [Pipeline] echo
00:00:06.800 Cleanup processes
00:00:06.806 [Pipeline] sh
00:00:07.088 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.088 2382907 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.100 [Pipeline] sh
00:00:07.380 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.380 ++ grep -v 'sudo pgrep'
00:00:07.380 ++ awk '{print $1}'
00:00:07.380 + sudo kill -9
00:00:07.380 + true
00:00:07.395 [Pipeline] cleanWs
00:00:07.405 [WS-CLEANUP] Deleting project workspace...
00:00:07.406 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.412 [WS-CLEANUP] done
00:00:07.416 [Pipeline] setCustomBuildProperty
00:00:07.428 [Pipeline] sh
00:00:07.708 + sudo git config --global --replace-all safe.directory '*'
00:00:07.787 [Pipeline] httpRequest
00:00:08.288 [Pipeline] echo
00:00:08.289 Sorcerer 10.211.164.101 is alive
00:00:08.297 [Pipeline] retry
00:00:08.298 [Pipeline] {
00:00:08.309 [Pipeline] httpRequest
00:00:08.312 HttpMethod: GET
00:00:08.313 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.314 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.324 Response Code: HTTP/1.1 200 OK
00:00:08.325 Success: Status code 200 is in the accepted range: 200,404
00:00:08.325 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:22.157 [Pipeline] }
00:00:22.173 [Pipeline] // retry
00:00:22.180 [Pipeline] sh
00:00:22.465 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:22.480 [Pipeline] httpRequest
00:00:23.920 [Pipeline] echo
00:00:23.922 Sorcerer 10.211.164.101 is alive
00:00:23.931 [Pipeline] retry
00:00:23.932 [Pipeline] {
00:00:23.944 [Pipeline] httpRequest
00:00:23.948 HttpMethod: GET
00:00:23.949 URL: http://10.211.164.101/packages/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz
00:00:23.950 Sending request to url: http://10.211.164.101/packages/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz
00:00:23.959 Response Code: HTTP/1.1 200 OK
00:00:23.960 Success: Status code 200 is in the accepted range: 200,404
00:00:23.960 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz
00:04:29.428 [Pipeline] }
00:04:29.446 [Pipeline] // retry
00:04:29.455 [Pipeline] sh
00:04:29.741 + tar --no-same-owner -xf spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz
00:04:32.281 [Pipeline] sh
00:04:32.563 + git -C spdk log --oneline -n5
00:04:32.563 496bfd677 env: match legacy mem mode config with DPDK
00:04:32.563 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:04:32.563 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:04:32.563 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:04:32.563 0ea9ac02f accel/mlx5: Create pool of UMRs
00:04:32.574 [Pipeline] }
00:04:32.588 [Pipeline] // stage
00:04:32.597 [Pipeline] stage
00:04:32.599 [Pipeline] { (Prepare)
00:04:32.614 [Pipeline] writeFile
00:04:32.629 [Pipeline] sh
00:04:32.911 + logger -p user.info -t JENKINS-CI
00:04:32.925 [Pipeline] sh
00:04:33.217 + logger -p user.info -t JENKINS-CI
00:04:33.229 [Pipeline] sh
00:04:33.513 + cat autorun-spdk.conf
00:04:33.514 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:33.514 SPDK_TEST_NVMF=1
00:04:33.514 SPDK_TEST_NVME_CLI=1
00:04:33.514 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:33.514 SPDK_TEST_NVMF_NICS=e810
00:04:33.514 SPDK_TEST_VFIOUSER=1
00:04:33.514 SPDK_RUN_UBSAN=1
00:04:33.514 NET_TYPE=phy
00:04:33.521 RUN_NIGHTLY=0
00:04:33.526 [Pipeline] readFile
00:04:33.551 [Pipeline] withEnv
00:04:33.554 [Pipeline] {
00:04:33.567 [Pipeline] sh
00:04:33.868 + set -ex
00:04:33.868 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:04:33.868 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:33.868 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:33.868 ++ SPDK_TEST_NVMF=1
00:04:33.868 ++ SPDK_TEST_NVME_CLI=1
00:04:33.868 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:33.868 ++ SPDK_TEST_NVMF_NICS=e810
00:04:33.868 ++ SPDK_TEST_VFIOUSER=1
00:04:33.868 ++ SPDK_RUN_UBSAN=1
00:04:33.868 ++ NET_TYPE=phy
00:04:33.868 ++ RUN_NIGHTLY=0
00:04:33.868 + case $SPDK_TEST_NVMF_NICS in
00:04:33.868 + DRIVERS=ice
00:04:33.868 + [[ tcp == \r\d\m\a ]]
00:04:33.868 + [[ -n ice ]]
00:04:33.868 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:04:33.868 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:04:33.868 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:04:33.869 rmmod: ERROR: Module irdma is not currently loaded
00:04:33.869 rmmod: ERROR: Module i40iw is not currently loaded
00:04:33.869 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:04:33.869 + true
00:04:33.869 + for D in $DRIVERS
00:04:33.869 + sudo modprobe ice
00:04:33.869 + exit 0
00:04:33.878 [Pipeline] }
00:04:33.894 [Pipeline] // withEnv
00:04:33.900 [Pipeline] }
00:04:33.914 [Pipeline] // stage
00:04:33.923 [Pipeline] catchError
00:04:33.925 [Pipeline] {
00:04:33.939 [Pipeline] timeout
00:04:33.939 Timeout set to expire in 1 hr 0 min
00:04:33.941 [Pipeline] {
00:04:33.956 [Pipeline] stage
00:04:33.958 [Pipeline] { (Tests)
00:04:33.972 [Pipeline] sh
00:04:34.257 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:34.257 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:34.257 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:34.257 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:04:34.257 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:34.257 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:34.257 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:04:34.257 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:34.257 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:34.257 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:34.257 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:04:34.257 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:34.257 + source /etc/os-release
00:04:34.257 ++ NAME='Fedora Linux'
00:04:34.257 ++ VERSION='39 (Cloud Edition)'
00:04:34.257 ++ ID=fedora
00:04:34.258 ++ VERSION_ID=39
00:04:34.258 ++ VERSION_CODENAME=
00:04:34.258 ++ PLATFORM_ID=platform:f39
00:04:34.258 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:34.258 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:34.258 ++ LOGO=fedora-logo-icon
00:04:34.258 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:34.258 ++ HOME_URL=https://fedoraproject.org/
00:04:34.258 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:34.258 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:34.258 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:34.258 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:34.258 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:34.258 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:34.258 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:34.258 ++ SUPPORT_END=2024-11-12
00:04:34.258 ++ VARIANT='Cloud Edition'
00:04:34.258 ++ VARIANT_ID=cloud
00:04:34.258 + uname -a
00:04:34.258 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:34.258 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:36.795 Hugepages
00:04:36.795 node hugesize free / total
00:04:36.795 node0 1048576kB 0 / 0
00:04:36.795 node0 2048kB 0 / 0
00:04:36.795 node1 1048576kB 0 / 0
00:04:36.795 node1 2048kB 0 / 0
00:04:36.795 
00:04:36.795 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:36.795 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:36.795 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:36.795 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:36.795 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:36.795 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:36.795 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:36.795 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:36.795 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:36.795 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:36.795 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:36.795 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:36.795 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:36.795 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:36.795 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:36.795 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:36.795 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:36.795 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:36.795 + rm -f /tmp/spdk-ld-path
00:04:36.795 + source autorun-spdk.conf
00:04:36.795 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:36.795 ++ SPDK_TEST_NVMF=1
00:04:36.795 ++ SPDK_TEST_NVME_CLI=1
00:04:36.795 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:36.795 ++ SPDK_TEST_NVMF_NICS=e810
00:04:36.795 ++ SPDK_TEST_VFIOUSER=1
00:04:36.795 ++ SPDK_RUN_UBSAN=1
00:04:36.795 ++ NET_TYPE=phy
00:04:36.795 ++ RUN_NIGHTLY=0
00:04:36.795 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:36.795 + [[ -n '' ]]
00:04:36.795 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:36.795 + for M in /var/spdk/build-*-manifest.txt
00:04:36.795 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:36.795 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:36.795 + for M in /var/spdk/build-*-manifest.txt
00:04:36.795 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:36.795 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:36.795 + for M in /var/spdk/build-*-manifest.txt
00:04:36.795 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:36.795 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:36.795 ++ uname
00:04:36.795 + [[ Linux == \L\i\n\u\x ]]
00:04:36.795 + sudo dmesg -T
00:04:37.056 + sudo dmesg --clear
00:04:37.056 + dmesg_pid=2384857
00:04:37.056 + [[ Fedora Linux == FreeBSD ]]
00:04:37.056 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:37.056 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:37.056 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:37.056 + [[ -x /usr/src/fio-static/fio ]]
00:04:37.056 + export FIO_BIN=/usr/src/fio-static/fio
00:04:37.056 + FIO_BIN=/usr/src/fio-static/fio
00:04:37.056 + sudo dmesg -Tw
00:04:37.056 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:37.056 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:37.056 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:37.056 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:37.056 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:37.056 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:37.056 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:37.056 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:37.056 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:37.056 10:15:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:37.056 10:15:14 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:04:37.056 10:15:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:04:37.056 10:15:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:37.056 10:15:14 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:37.056 10:15:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:37.056 10:15:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:37.056 10:15:14 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:37.056 10:15:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:37.056 10:15:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:37.056 10:15:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:37.056 10:15:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:37.056 10:15:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:37.056 10:15:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:37.056 10:15:14 -- paths/export.sh@5 -- $ export PATH
00:04:37.056 10:15:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:37.056 10:15:14 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:04:37.056 10:15:14 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:37.056 10:15:14 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733735714.XXXXXX
00:04:37.056 10:15:14 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733735714.xKXuOc
00:04:37.056 10:15:14 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:37.056 10:15:14 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:37.056 10:15:14 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:04:37.056 10:15:14 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:04:37.056 10:15:14 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:04:37.056 10:15:14 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:37.056 10:15:14 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:37.056 10:15:14 -- common/autotest_common.sh@10 -- $ set +x
00:04:37.056 10:15:14 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:04:37.056 10:15:14 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:37.056 10:15:14 -- pm/common@17 -- $ local monitor
00:04:37.056 10:15:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:37.056 10:15:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:37.056 10:15:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:37.056 10:15:14 -- pm/common@21 -- $ date +%s
00:04:37.056 10:15:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:37.056 10:15:14 -- pm/common@21 -- $ date +%s
00:04:37.056 10:15:14 -- pm/common@25 -- $ sleep 1
00:04:37.056 10:15:14 -- pm/common@21 -- $ date +%s
00:04:37.056 10:15:14 -- pm/common@21 -- $ date +%s
00:04:37.056 10:15:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735714
00:04:37.056 10:15:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735714
00:04:37.056 10:15:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735714
00:04:37.056 10:15:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735714
00:04:37.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735714_collect-cpu-load.pm.log
00:04:37.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735714_collect-vmstat.pm.log
00:04:37.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735714_collect-cpu-temp.pm.log
00:04:37.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735714_collect-bmc-pm.bmc.pm.log
00:04:38.252 10:15:15 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:38.252 10:15:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:38.252 10:15:15 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:38.252 10:15:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:38.252 10:15:15 -- spdk/autobuild.sh@16 -- $ date -u
00:04:38.252 Mon Dec 9 09:15:15 AM UTC 2024
00:04:38.252 10:15:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:38.252 v25.01-pre-312-g496bfd677
00:04:38.252 10:15:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:04:38.252 10:15:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:38.252 10:15:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:38.252 10:15:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:38.252 10:15:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:38.252 10:15:15 -- common/autotest_common.sh@10 -- $ set +x
00:04:38.252 ************************************
00:04:38.252 START TEST ubsan
00:04:38.252 ************************************
00:04:38.252 10:15:15 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:38.252 using ubsan
00:04:38.252 
00:04:38.252 real 0m0.000s
00:04:38.252 user 0m0.000s
00:04:38.252 sys 0m0.000s
00:04:38.252 10:15:15 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:38.252 10:15:15 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:38.252 ************************************
00:04:38.252 END TEST ubsan
00:04:38.252 ************************************
00:04:38.252 10:15:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:38.252 10:15:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:38.252 10:15:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:38.252 10:15:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:38.252 10:15:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:38.252 10:15:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:38.252 10:15:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:38.252 10:15:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:38.252 10:15:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:04:38.510 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:04:38.510 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:38.768 Using 'verbs' RDMA provider
00:04:51.537 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:03.815 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:03.815 Creating mk/config.mk...done.
00:05:03.815 Creating mk/cc.flags.mk...done.
00:05:03.815 Type 'make' to build.
00:05:03.815 10:15:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:05:03.815 10:15:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:03.815 10:15:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:03.815 10:15:41 -- common/autotest_common.sh@10 -- $ set +x
00:05:03.815 ************************************
00:05:03.815 START TEST make
00:05:03.815 ************************************
00:05:03.815 10:15:41 make -- common/autotest_common.sh@1129 -- $ make -j96
00:05:04.088 make[1]: Nothing to be done for 'all'.
00:05:05.471 The Meson build system
00:05:05.471 Version: 1.5.0
00:05:05.471 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:05:05.471 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:05.471 Build type: native build
00:05:05.471 Project name: libvfio-user
00:05:05.471 Project version: 0.0.1
00:05:05.471 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:05.471 C linker for the host machine: cc ld.bfd 2.40-14
00:05:05.471 Host machine cpu family: x86_64
00:05:05.471 Host machine cpu: x86_64
00:05:05.471 Run-time dependency threads found: YES
00:05:05.471 Library dl found: YES
00:05:05.471 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:05.471 Run-time dependency json-c found: YES 0.17
00:05:05.471 Run-time dependency cmocka found: YES 1.1.7
00:05:05.471 Program pytest-3 found: NO
00:05:05.471 Program flake8 found: NO
00:05:05.471 Program misspell-fixer found: NO
00:05:05.471 Program restructuredtext-lint found: NO
00:05:05.471 Program valgrind found: YES (/usr/bin/valgrind)
00:05:05.471 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:05.471 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:05.471 Compiler for C supports arguments -Wwrite-strings: YES
00:05:05.471 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:05.471 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:05.471 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:05.471 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:05.471 Build targets in project: 8
00:05:05.471 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:05.471 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:05.471 
00:05:05.471 libvfio-user 0.0.1
00:05:05.471 
00:05:05.472 User defined options
00:05:05.472 buildtype : debug
00:05:05.472 default_library: shared
00:05:05.472 libdir : /usr/local/lib
00:05:05.472 
00:05:05.472 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:06.037 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:06.037 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:06.037 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:06.037 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:06.037 [4/37] Compiling C object samples/null.p/null.c.o
00:05:06.037 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:06.037 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:06.037 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:06.037 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:06.037 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:06.037 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:06.037 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:06.038 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:06.038 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:06.038 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:06.038 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:06.038 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:06.038 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:06.038 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:06.038 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:06.038 [20/37] Compiling C object samples/server.p/server.c.o
00:05:06.038 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:06.295 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:06.295 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:06.295 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:06.295 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:06.295 [26/37] Compiling C object samples/client.p/client.c.o
00:05:06.295 [27/37] Linking target samples/client
00:05:06.295 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:06.295 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:06.295 [30/37] Linking target test/unit_tests
00:05:06.295 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:05:06.569 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:06.569 [33/37] Linking target samples/gpio-pci-idio-16
00:05:06.569 [34/37] Linking target samples/null
00:05:06.569 [35/37] Linking target samples/lspci
00:05:06.569 [36/37] Linking target samples/server
00:05:06.569 [37/37] Linking target samples/shadow_ioeventfd_server
00:05:06.569 INFO: autodetecting backend as ninja
00:05:06.569 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:06.569 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:07.137 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:07.137 ninja: no work to do.
00:05:12.404 The Meson build system
00:05:12.404 Version: 1.5.0
00:05:12.404 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:05:12.404 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:05:12.404 Build type: native build
00:05:12.404 Program cat found: YES (/usr/bin/cat)
00:05:12.404 Project name: DPDK
00:05:12.404 Project version: 24.03.0
00:05:12.404 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:12.404 C linker for the host machine: cc ld.bfd 2.40-14
00:05:12.404 Host machine cpu family: x86_64
00:05:12.404 Host machine cpu: x86_64
00:05:12.404 Message: ## Building in Developer Mode ##
00:05:12.404 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:12.404 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:12.404 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:12.404 Program python3 found: YES (/usr/bin/python3)
00:05:12.404 Program cat found: YES (/usr/bin/cat)
00:05:12.404 Compiler for C supports arguments -march=native: YES
00:05:12.404 Checking for size of "void *" : 8
00:05:12.404 Checking for size of "void *" : 8 (cached)
00:05:12.404 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:12.404 Library m found: YES
00:05:12.404 Library numa found: YES
00:05:12.404 Has header "numaif.h" : YES
00:05:12.404 Library fdt found: NO
00:05:12.404 Library execinfo found: NO
00:05:12.404 Has header "execinfo.h" : YES
00:05:12.404 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:12.404 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:12.404 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:12.404 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:12.404 Run-time dependency openssl found: YES 3.1.1
00:05:12.404 Run-time dependency libpcap found: YES 1.10.4
00:05:12.404 Has header "pcap.h" with dependency libpcap: YES
00:05:12.404 Compiler for C supports arguments -Wcast-qual: YES
00:05:12.404 Compiler for C supports arguments -Wdeprecated: YES
00:05:12.404 Compiler for C supports arguments -Wformat: YES
00:05:12.404 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:12.404 Compiler for C supports arguments -Wformat-security: NO
00:05:12.404 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:12.405 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:12.405 Compiler for C supports arguments -Wnested-externs: YES
00:05:12.405 Compiler for C supports arguments -Wold-style-definition: YES
00:05:12.405 Compiler for C supports arguments -Wpointer-arith: YES
00:05:12.405 Compiler for C supports arguments -Wsign-compare: YES
00:05:12.405 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:12.405 Compiler for C supports arguments -Wundef: YES
00:05:12.405 Compiler for C supports arguments -Wwrite-strings: YES
00:05:12.405 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:12.405 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:12.405 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:12.405 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:12.405 Program objdump found: YES (/usr/bin/objdump)
00:05:12.405 Compiler for C supports arguments -mavx512f: YES
00:05:12.405 Checking if "AVX512 checking" compiles: YES
00:05:12.405 Fetching value of define "__SSE4_2__" : 1
00:05:12.405 Fetching value of define "__AES__" : 1
00:05:12.405 Fetching value of define "__AVX__" : 1
00:05:12.405 Fetching value of define "__AVX2__" : 1
00:05:12.405 Fetching value of define "__AVX512BW__" : 1
00:05:12.405 Fetching value of define "__AVX512CD__" : 1
00:05:12.405 Fetching value of define "__AVX512DQ__" : 1
00:05:12.405 Fetching value of define "__AVX512F__" : 1
00:05:12.405 Fetching value of define "__AVX512VL__" : 1 00:05:12.405 Fetching value of define "__PCLMUL__" : 1 00:05:12.405 Fetching value of define "__RDRND__" : 1 00:05:12.405 Fetching value of define "__RDSEED__" : 1 00:05:12.405 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:12.405 Fetching value of define "__znver1__" : (undefined) 00:05:12.405 Fetching value of define "__znver2__" : (undefined) 00:05:12.405 Fetching value of define "__znver3__" : (undefined) 00:05:12.405 Fetching value of define "__znver4__" : (undefined) 00:05:12.405 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:12.405 Message: lib/log: Defining dependency "log" 00:05:12.405 Message: lib/kvargs: Defining dependency "kvargs" 00:05:12.405 Message: lib/telemetry: Defining dependency "telemetry" 00:05:12.405 Checking for function "getentropy" : NO 00:05:12.405 Message: lib/eal: Defining dependency "eal" 00:05:12.405 Message: lib/ring: Defining dependency "ring" 00:05:12.405 Message: lib/rcu: Defining dependency "rcu" 00:05:12.405 Message: lib/mempool: Defining dependency "mempool" 00:05:12.405 Message: lib/mbuf: Defining dependency "mbuf" 00:05:12.405 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:12.405 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:12.405 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:12.405 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:12.405 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:12.405 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:12.405 Compiler for C supports arguments -mpclmul: YES 00:05:12.405 Compiler for C supports arguments -maes: YES 00:05:12.405 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:12.405 Compiler for C supports arguments -mavx512bw: YES 00:05:12.405 Compiler for C supports arguments -mavx512dq: YES 00:05:12.405 Compiler for C supports arguments -mavx512vl: YES 00:05:12.405 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:05:12.405 Compiler for C supports arguments -mavx2: YES 00:05:12.405 Compiler for C supports arguments -mavx: YES 00:05:12.405 Message: lib/net: Defining dependency "net" 00:05:12.405 Message: lib/meter: Defining dependency "meter" 00:05:12.405 Message: lib/ethdev: Defining dependency "ethdev" 00:05:12.405 Message: lib/pci: Defining dependency "pci" 00:05:12.405 Message: lib/cmdline: Defining dependency "cmdline" 00:05:12.405 Message: lib/hash: Defining dependency "hash" 00:05:12.405 Message: lib/timer: Defining dependency "timer" 00:05:12.405 Message: lib/compressdev: Defining dependency "compressdev" 00:05:12.405 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:12.405 Message: lib/dmadev: Defining dependency "dmadev" 00:05:12.405 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:12.405 Message: lib/power: Defining dependency "power" 00:05:12.405 Message: lib/reorder: Defining dependency "reorder" 00:05:12.405 Message: lib/security: Defining dependency "security" 00:05:12.405 Has header "linux/userfaultfd.h" : YES 00:05:12.405 Has header "linux/vduse.h" : YES 00:05:12.405 Message: lib/vhost: Defining dependency "vhost" 00:05:12.405 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:12.405 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:12.405 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:12.405 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:12.405 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:12.405 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:12.405 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:12.405 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:12.405 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:12.405 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:05:12.405 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:12.405 Configuring doxy-api-html.conf using configuration 00:05:12.405 Configuring doxy-api-man.conf using configuration 00:05:12.405 Program mandb found: YES (/usr/bin/mandb) 00:05:12.405 Program sphinx-build found: NO 00:05:12.405 Configuring rte_build_config.h using configuration 00:05:12.405 Message: 00:05:12.405 ================= 00:05:12.405 Applications Enabled 00:05:12.405 ================= 00:05:12.405 00:05:12.405 apps: 00:05:12.405 00:05:12.405 00:05:12.405 Message: 00:05:12.405 ================= 00:05:12.405 Libraries Enabled 00:05:12.405 ================= 00:05:12.405 00:05:12.405 libs: 00:05:12.405 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:12.405 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:12.405 cryptodev, dmadev, power, reorder, security, vhost, 00:05:12.405 00:05:12.405 Message: 00:05:12.405 =============== 00:05:12.405 Drivers Enabled 00:05:12.405 =============== 00:05:12.405 00:05:12.405 common: 00:05:12.405 00:05:12.405 bus: 00:05:12.405 pci, vdev, 00:05:12.405 mempool: 00:05:12.405 ring, 00:05:12.405 dma: 00:05:12.405 00:05:12.405 net: 00:05:12.405 00:05:12.405 crypto: 00:05:12.405 00:05:12.405 compress: 00:05:12.405 00:05:12.405 vdpa: 00:05:12.405 00:05:12.405 00:05:12.405 Message: 00:05:12.405 ================= 00:05:12.405 Content Skipped 00:05:12.405 ================= 00:05:12.405 00:05:12.405 apps: 00:05:12.405 dumpcap: explicitly disabled via build config 00:05:12.405 graph: explicitly disabled via build config 00:05:12.405 pdump: explicitly disabled via build config 00:05:12.405 proc-info: explicitly disabled via build config 00:05:12.405 test-acl: explicitly disabled via build config 00:05:12.405 test-bbdev: explicitly disabled via build config 00:05:12.405 test-cmdline: explicitly disabled via build config 00:05:12.405 test-compress-perf: explicitly disabled via build config 00:05:12.405 test-crypto-perf: explicitly disabled 
via build config 00:05:12.405 test-dma-perf: explicitly disabled via build config 00:05:12.405 test-eventdev: explicitly disabled via build config 00:05:12.405 test-fib: explicitly disabled via build config 00:05:12.405 test-flow-perf: explicitly disabled via build config 00:05:12.405 test-gpudev: explicitly disabled via build config 00:05:12.405 test-mldev: explicitly disabled via build config 00:05:12.405 test-pipeline: explicitly disabled via build config 00:05:12.405 test-pmd: explicitly disabled via build config 00:05:12.405 test-regex: explicitly disabled via build config 00:05:12.405 test-sad: explicitly disabled via build config 00:05:12.405 test-security-perf: explicitly disabled via build config 00:05:12.405 00:05:12.405 libs: 00:05:12.405 argparse: explicitly disabled via build config 00:05:12.405 metrics: explicitly disabled via build config 00:05:12.405 acl: explicitly disabled via build config 00:05:12.405 bbdev: explicitly disabled via build config 00:05:12.405 bitratestats: explicitly disabled via build config 00:05:12.405 bpf: explicitly disabled via build config 00:05:12.405 cfgfile: explicitly disabled via build config 00:05:12.405 distributor: explicitly disabled via build config 00:05:12.405 efd: explicitly disabled via build config 00:05:12.405 eventdev: explicitly disabled via build config 00:05:12.405 dispatcher: explicitly disabled via build config 00:05:12.405 gpudev: explicitly disabled via build config 00:05:12.405 gro: explicitly disabled via build config 00:05:12.405 gso: explicitly disabled via build config 00:05:12.405 ip_frag: explicitly disabled via build config 00:05:12.405 jobstats: explicitly disabled via build config 00:05:12.405 latencystats: explicitly disabled via build config 00:05:12.405 lpm: explicitly disabled via build config 00:05:12.405 member: explicitly disabled via build config 00:05:12.405 pcapng: explicitly disabled via build config 00:05:12.405 rawdev: explicitly disabled via build config 00:05:12.405 regexdev: 
explicitly disabled via build config 00:05:12.405 mldev: explicitly disabled via build config 00:05:12.405 rib: explicitly disabled via build config 00:05:12.405 sched: explicitly disabled via build config 00:05:12.405 stack: explicitly disabled via build config 00:05:12.405 ipsec: explicitly disabled via build config 00:05:12.405 pdcp: explicitly disabled via build config 00:05:12.405 fib: explicitly disabled via build config 00:05:12.405 port: explicitly disabled via build config 00:05:12.405 pdump: explicitly disabled via build config 00:05:12.405 table: explicitly disabled via build config 00:05:12.405 pipeline: explicitly disabled via build config 00:05:12.405 graph: explicitly disabled via build config 00:05:12.405 node: explicitly disabled via build config 00:05:12.405 00:05:12.405 drivers: 00:05:12.405 common/cpt: not in enabled drivers build config 00:05:12.405 common/dpaax: not in enabled drivers build config 00:05:12.405 common/iavf: not in enabled drivers build config 00:05:12.405 common/idpf: not in enabled drivers build config 00:05:12.405 common/ionic: not in enabled drivers build config 00:05:12.405 common/mvep: not in enabled drivers build config 00:05:12.405 common/octeontx: not in enabled drivers build config 00:05:12.405 bus/auxiliary: not in enabled drivers build config 00:05:12.405 bus/cdx: not in enabled drivers build config 00:05:12.405 bus/dpaa: not in enabled drivers build config 00:05:12.405 bus/fslmc: not in enabled drivers build config 00:05:12.405 bus/ifpga: not in enabled drivers build config 00:05:12.405 bus/platform: not in enabled drivers build config 00:05:12.405 bus/uacce: not in enabled drivers build config 00:05:12.405 bus/vmbus: not in enabled drivers build config 00:05:12.405 common/cnxk: not in enabled drivers build config 00:05:12.405 common/mlx5: not in enabled drivers build config 00:05:12.405 common/nfp: not in enabled drivers build config 00:05:12.405 common/nitrox: not in enabled drivers build config 00:05:12.405 
common/qat: not in enabled drivers build config 00:05:12.405 common/sfc_efx: not in enabled drivers build config 00:05:12.405 mempool/bucket: not in enabled drivers build config 00:05:12.405 mempool/cnxk: not in enabled drivers build config 00:05:12.405 mempool/dpaa: not in enabled drivers build config 00:05:12.405 mempool/dpaa2: not in enabled drivers build config 00:05:12.405 mempool/octeontx: not in enabled drivers build config 00:05:12.405 mempool/stack: not in enabled drivers build config 00:05:12.405 dma/cnxk: not in enabled drivers build config 00:05:12.405 dma/dpaa: not in enabled drivers build config 00:05:12.405 dma/dpaa2: not in enabled drivers build config 00:05:12.405 dma/hisilicon: not in enabled drivers build config 00:05:12.405 dma/idxd: not in enabled drivers build config 00:05:12.405 dma/ioat: not in enabled drivers build config 00:05:12.405 dma/skeleton: not in enabled drivers build config 00:05:12.405 net/af_packet: not in enabled drivers build config 00:05:12.405 net/af_xdp: not in enabled drivers build config 00:05:12.405 net/ark: not in enabled drivers build config 00:05:12.405 net/atlantic: not in enabled drivers build config 00:05:12.405 net/avp: not in enabled drivers build config 00:05:12.406 net/axgbe: not in enabled drivers build config 00:05:12.406 net/bnx2x: not in enabled drivers build config 00:05:12.406 net/bnxt: not in enabled drivers build config 00:05:12.406 net/bonding: not in enabled drivers build config 00:05:12.406 net/cnxk: not in enabled drivers build config 00:05:12.406 net/cpfl: not in enabled drivers build config 00:05:12.406 net/cxgbe: not in enabled drivers build config 00:05:12.406 net/dpaa: not in enabled drivers build config 00:05:12.406 net/dpaa2: not in enabled drivers build config 00:05:12.406 net/e1000: not in enabled drivers build config 00:05:12.406 net/ena: not in enabled drivers build config 00:05:12.406 net/enetc: not in enabled drivers build config 00:05:12.406 net/enetfec: not in enabled drivers build 
config 00:05:12.406 net/enic: not in enabled drivers build config 00:05:12.406 net/failsafe: not in enabled drivers build config 00:05:12.406 net/fm10k: not in enabled drivers build config 00:05:12.406 net/gve: not in enabled drivers build config 00:05:12.406 net/hinic: not in enabled drivers build config 00:05:12.406 net/hns3: not in enabled drivers build config 00:05:12.406 net/i40e: not in enabled drivers build config 00:05:12.406 net/iavf: not in enabled drivers build config 00:05:12.406 net/ice: not in enabled drivers build config 00:05:12.406 net/idpf: not in enabled drivers build config 00:05:12.406 net/igc: not in enabled drivers build config 00:05:12.406 net/ionic: not in enabled drivers build config 00:05:12.406 net/ipn3ke: not in enabled drivers build config 00:05:12.406 net/ixgbe: not in enabled drivers build config 00:05:12.406 net/mana: not in enabled drivers build config 00:05:12.406 net/memif: not in enabled drivers build config 00:05:12.406 net/mlx4: not in enabled drivers build config 00:05:12.406 net/mlx5: not in enabled drivers build config 00:05:12.406 net/mvneta: not in enabled drivers build config 00:05:12.406 net/mvpp2: not in enabled drivers build config 00:05:12.406 net/netvsc: not in enabled drivers build config 00:05:12.406 net/nfb: not in enabled drivers build config 00:05:12.406 net/nfp: not in enabled drivers build config 00:05:12.406 net/ngbe: not in enabled drivers build config 00:05:12.406 net/null: not in enabled drivers build config 00:05:12.406 net/octeontx: not in enabled drivers build config 00:05:12.406 net/octeon_ep: not in enabled drivers build config 00:05:12.406 net/pcap: not in enabled drivers build config 00:05:12.406 net/pfe: not in enabled drivers build config 00:05:12.406 net/qede: not in enabled drivers build config 00:05:12.406 net/ring: not in enabled drivers build config 00:05:12.406 net/sfc: not in enabled drivers build config 00:05:12.406 net/softnic: not in enabled drivers build config 00:05:12.406 net/tap: 
not in enabled drivers build config 00:05:12.406 net/thunderx: not in enabled drivers build config 00:05:12.406 net/txgbe: not in enabled drivers build config 00:05:12.406 net/vdev_netvsc: not in enabled drivers build config 00:05:12.406 net/vhost: not in enabled drivers build config 00:05:12.406 net/virtio: not in enabled drivers build config 00:05:12.406 net/vmxnet3: not in enabled drivers build config 00:05:12.406 raw/*: missing internal dependency, "rawdev" 00:05:12.406 crypto/armv8: not in enabled drivers build config 00:05:12.406 crypto/bcmfs: not in enabled drivers build config 00:05:12.406 crypto/caam_jr: not in enabled drivers build config 00:05:12.406 crypto/ccp: not in enabled drivers build config 00:05:12.406 crypto/cnxk: not in enabled drivers build config 00:05:12.406 crypto/dpaa_sec: not in enabled drivers build config 00:05:12.406 crypto/dpaa2_sec: not in enabled drivers build config 00:05:12.406 crypto/ipsec_mb: not in enabled drivers build config 00:05:12.406 crypto/mlx5: not in enabled drivers build config 00:05:12.406 crypto/mvsam: not in enabled drivers build config 00:05:12.406 crypto/nitrox: not in enabled drivers build config 00:05:12.406 crypto/null: not in enabled drivers build config 00:05:12.406 crypto/octeontx: not in enabled drivers build config 00:05:12.406 crypto/openssl: not in enabled drivers build config 00:05:12.406 crypto/scheduler: not in enabled drivers build config 00:05:12.406 crypto/uadk: not in enabled drivers build config 00:05:12.406 crypto/virtio: not in enabled drivers build config 00:05:12.406 compress/isal: not in enabled drivers build config 00:05:12.406 compress/mlx5: not in enabled drivers build config 00:05:12.406 compress/nitrox: not in enabled drivers build config 00:05:12.406 compress/octeontx: not in enabled drivers build config 00:05:12.406 compress/zlib: not in enabled drivers build config 00:05:12.406 regex/*: missing internal dependency, "regexdev" 00:05:12.406 ml/*: missing internal dependency, "mldev" 
00:05:12.406 vdpa/ifc: not in enabled drivers build config 00:05:12.406 vdpa/mlx5: not in enabled drivers build config 00:05:12.406 vdpa/nfp: not in enabled drivers build config 00:05:12.406 vdpa/sfc: not in enabled drivers build config 00:05:12.406 event/*: missing internal dependency, "eventdev" 00:05:12.406 baseband/*: missing internal dependency, "bbdev" 00:05:12.406 gpu/*: missing internal dependency, "gpudev" 00:05:12.406 00:05:12.406 00:05:12.406 Build targets in project: 85 00:05:12.406 00:05:12.406 DPDK 24.03.0 00:05:12.406 00:05:12.406 User defined options 00:05:12.406 buildtype : debug 00:05:12.406 default_library : shared 00:05:12.406 libdir : lib 00:05:12.406 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:12.406 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:12.406 c_link_args : 00:05:12.406 cpu_instruction_set: native 00:05:12.406 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:05:12.406 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:05:12.406 enable_docs : false 00:05:12.406 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:12.406 enable_kmods : false 00:05:12.406 max_lcores : 128 00:05:12.406 tests : false 00:05:12.406 00:05:12.406 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:12.975 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:12.975 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:12.975 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:12.975 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:12.975 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:12.975 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:12.975 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:12.975 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:12.975 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:12.975 [9/268] Linking static target lib/librte_kvargs.a 00:05:12.975 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:12.975 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:12.975 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:12.975 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:12.975 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:12.975 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:12.975 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:12.975 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:12.975 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:12.975 [19/268] Linking static target lib/librte_log.a 00:05:13.231 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:13.231 [21/268] Linking static target lib/librte_pci.a 00:05:13.231 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:13.231 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:13.231 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:13.231 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:13.489 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:13.489 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:13.489 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:13.489 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:13.489 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:13.489 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:13.489 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:13.489 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:13.489 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:13.489 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:13.489 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:13.489 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:13.489 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:13.489 [39/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:13.489 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:13.489 [41/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:13.489 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:13.489 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:13.489 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:13.489 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:13.489 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:13.489 [47/268] Compiling C 
object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:13.489 [48/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.489 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:13.489 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:13.489 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:13.489 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:13.489 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:13.489 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:13.489 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:13.489 [56/268] Linking static target lib/librte_meter.a 00:05:13.489 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:13.489 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:13.489 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:13.489 [60/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:13.489 [61/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:13.489 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:13.489 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:13.489 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:13.489 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:13.489 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:13.489 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:13.489 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:13.489 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:13.489 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:13.489 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:13.489 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:13.489 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:13.489 [74/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:13.489 [75/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:13.489 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:13.489 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:13.489 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:13.489 [79/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:13.489 [80/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:13.489 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:13.489 [82/268] Linking static target lib/librte_ring.a 00:05:13.489 [83/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:13.489 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:13.489 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:13.489 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:13.489 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:13.489 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:13.489 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:13.489 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:13.489 [91/268] Linking static target lib/librte_telemetry.a 00:05:13.489 [92/268] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:13.489 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:13.489 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:13.489 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:13.747 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:13.747 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:13.747 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:13.747 [99/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:13.747 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:13.747 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:13.747 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.747 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:13.747 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:13.747 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:13.747 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:13.747 [107/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:13.747 [108/268] Linking static target lib/librte_mempool.a 00:05:13.747 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:13.747 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:13.747 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:13.747 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:13.747 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:13.747 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 
00:05:13.747 [115/268] Linking static target lib/librte_net.a 00:05:13.747 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:13.747 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:13.747 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:13.747 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:13.747 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:13.747 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:13.747 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:13.747 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:13.747 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:13.747 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:13.747 [126/268] Linking static target lib/librte_rcu.a 00:05:13.747 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:13.747 [128/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:13.747 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:13.747 [130/268] Linking static target lib/librte_eal.a 00:05:13.747 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:13.747 [132/268] Linking static target lib/librte_cmdline.a 00:05:13.747 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:13.747 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:13.747 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:13.747 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.747 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.006 [138/268] 
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.006 [139/268] Linking target lib/librte_log.so.24.1 00:05:14.006 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:14.006 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:14.006 [142/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:14.006 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:14.006 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:14.006 [145/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.006 [146/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:14.006 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:14.006 [148/268] Linking static target lib/librte_mbuf.a 00:05:14.006 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:14.006 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:14.006 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:14.006 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:14.006 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:14.006 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:14.006 [155/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:14.006 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:14.006 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:14.006 [158/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:14.006 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:14.006 [160/268] Linking target 
lib/librte_kvargs.so.24.1 00:05:14.006 [161/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.006 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:14.006 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:14.006 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:14.006 [165/268] Linking static target lib/librte_dmadev.a 00:05:14.006 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:14.006 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:14.006 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:14.006 [169/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:14.006 [170/268] Linking static target lib/librte_timer.a 00:05:14.006 [171/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.006 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:14.006 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:14.006 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:14.006 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:14.265 [176/268] Linking static target lib/librte_power.a 00:05:14.265 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:14.265 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:14.265 [179/268] Linking static target lib/librte_compressdev.a 00:05:14.265 [180/268] Linking target lib/librte_telemetry.so.24.1 00:05:14.265 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:14.265 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:14.265 [183/268] Generating symbol 
file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:14.265 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:14.265 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:14.265 [186/268] Linking static target lib/librte_security.a 00:05:14.265 [187/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:14.265 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:14.265 [189/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:14.265 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:14.265 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:14.265 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:14.265 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:14.265 [194/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:14.265 [195/268] Linking static target drivers/librte_bus_vdev.a 00:05:14.265 [196/268] Linking static target lib/librte_reorder.a 00:05:14.265 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:14.265 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:14.265 [199/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:14.525 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:14.525 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:14.525 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:14.525 [203/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:14.525 [204/268] Linking static target lib/librte_hash.a 00:05:14.525 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:14.525 
[206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:14.525 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:14.525 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:14.525 [209/268] Linking static target drivers/librte_mempool_ring.a 00:05:14.525 [210/268] Linking static target drivers/librte_bus_pci.a 00:05:14.525 [211/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:14.525 [212/268] Linking static target lib/librte_cryptodev.a 00:05:14.525 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.525 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.525 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.783 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.783 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.783 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.783 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:14.783 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.783 [221/268] Linking static target lib/librte_ethdev.a 00:05:14.783 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.783 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.040 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.040 [225/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:15.296 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.296 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.227 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:16.227 [229/268] Linking static target lib/librte_vhost.a 00:05:16.227 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.128 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.392 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.650 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.650 [234/268] Linking target lib/librte_eal.so.24.1 00:05:23.908 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:23.908 [236/268] Linking target lib/librte_ring.so.24.1 00:05:23.908 [237/268] Linking target lib/librte_pci.so.24.1 00:05:23.908 [238/268] Linking target lib/librte_meter.so.24.1 00:05:23.908 [239/268] Linking target lib/librte_timer.so.24.1 00:05:23.908 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:23.908 [241/268] Linking target lib/librte_dmadev.so.24.1 00:05:24.166 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:24.166 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:24.166 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:24.166 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:24.166 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:24.166 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:24.166 
[248/268] Linking target lib/librte_mempool.so.24.1 00:05:24.166 [249/268] Linking target lib/librte_rcu.so.24.1 00:05:24.166 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:24.166 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:24.423 [252/268] Linking target lib/librte_mbuf.so.24.1 00:05:24.423 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:24.423 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:24.423 [255/268] Linking target lib/librte_net.so.24.1 00:05:24.423 [256/268] Linking target lib/librte_reorder.so.24.1 00:05:24.423 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:05:24.423 [258/268] Linking target lib/librte_compressdev.so.24.1 00:05:24.682 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:24.682 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:24.682 [261/268] Linking target lib/librte_security.so.24.1 00:05:24.682 [262/268] Linking target lib/librte_hash.so.24.1 00:05:24.682 [263/268] Linking target lib/librte_cmdline.so.24.1 00:05:24.682 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:24.940 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:24.940 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:24.940 [267/268] Linking target lib/librte_power.so.24.1 00:05:24.940 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:24.940 INFO: autodetecting backend as ninja 00:05:24.940 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:05:34.911 CC lib/ut_mock/mock.o 00:05:34.911 CC lib/log/log.o 00:05:34.911 CC lib/ut/ut.o 00:05:34.911 CC lib/log/log_flags.o 00:05:34.911 CC lib/log/log_deprecated.o 00:05:35.169 LIB 
libspdk_ut_mock.a 00:05:35.169 LIB libspdk_ut.a 00:05:35.169 LIB libspdk_log.a 00:05:35.169 SO libspdk_ut.so.2.0 00:05:35.169 SO libspdk_ut_mock.so.6.0 00:05:35.169 SO libspdk_log.so.7.1 00:05:35.169 SYMLINK libspdk_ut.so 00:05:35.169 SYMLINK libspdk_ut_mock.so 00:05:35.169 SYMLINK libspdk_log.so 00:05:35.429 CC lib/util/base64.o 00:05:35.429 CC lib/util/bit_array.o 00:05:35.429 CC lib/ioat/ioat.o 00:05:35.429 CC lib/util/cpuset.o 00:05:35.429 CC lib/util/crc16.o 00:05:35.429 CC lib/dma/dma.o 00:05:35.429 CC lib/util/crc32.o 00:05:35.429 CC lib/util/crc32c.o 00:05:35.429 CC lib/util/crc32_ieee.o 00:05:35.429 CC lib/util/crc64.o 00:05:35.429 CXX lib/trace_parser/trace.o 00:05:35.429 CC lib/util/dif.o 00:05:35.429 CC lib/util/fd.o 00:05:35.429 CC lib/util/fd_group.o 00:05:35.429 CC lib/util/file.o 00:05:35.429 CC lib/util/hexlify.o 00:05:35.429 CC lib/util/iov.o 00:05:35.429 CC lib/util/math.o 00:05:35.429 CC lib/util/net.o 00:05:35.687 CC lib/util/pipe.o 00:05:35.687 CC lib/util/strerror_tls.o 00:05:35.687 CC lib/util/string.o 00:05:35.687 CC lib/util/uuid.o 00:05:35.687 CC lib/util/xor.o 00:05:35.687 CC lib/util/zipf.o 00:05:35.687 CC lib/util/md5.o 00:05:35.687 CC lib/vfio_user/host/vfio_user_pci.o 00:05:35.687 CC lib/vfio_user/host/vfio_user.o 00:05:35.687 LIB libspdk_dma.a 00:05:35.945 SO libspdk_dma.so.5.0 00:05:35.945 LIB libspdk_ioat.a 00:05:35.945 SYMLINK libspdk_dma.so 00:05:35.945 SO libspdk_ioat.so.7.0 00:05:35.945 SYMLINK libspdk_ioat.so 00:05:35.945 LIB libspdk_vfio_user.a 00:05:35.945 SO libspdk_vfio_user.so.5.0 00:05:35.945 SYMLINK libspdk_vfio_user.so 00:05:35.945 LIB libspdk_util.a 00:05:36.202 SO libspdk_util.so.10.1 00:05:36.202 SYMLINK libspdk_util.so 00:05:36.202 LIB libspdk_trace_parser.a 00:05:36.202 SO libspdk_trace_parser.so.6.0 00:05:36.460 SYMLINK libspdk_trace_parser.so 00:05:36.460 CC lib/rdma_utils/rdma_utils.o 00:05:36.460 CC lib/idxd/idxd.o 00:05:36.460 CC lib/idxd/idxd_user.o 00:05:36.460 CC lib/idxd/idxd_kernel.o 00:05:36.460 CC 
lib/conf/conf.o 00:05:36.460 CC lib/json/json_parse.o 00:05:36.460 CC lib/json/json_util.o 00:05:36.460 CC lib/vmd/vmd.o 00:05:36.460 CC lib/vmd/led.o 00:05:36.460 CC lib/json/json_write.o 00:05:36.460 CC lib/env_dpdk/env.o 00:05:36.460 CC lib/env_dpdk/memory.o 00:05:36.460 CC lib/env_dpdk/pci.o 00:05:36.460 CC lib/env_dpdk/init.o 00:05:36.460 CC lib/env_dpdk/threads.o 00:05:36.460 CC lib/env_dpdk/pci_ioat.o 00:05:36.460 CC lib/env_dpdk/pci_virtio.o 00:05:36.460 CC lib/env_dpdk/pci_vmd.o 00:05:36.460 CC lib/env_dpdk/pci_idxd.o 00:05:36.460 CC lib/env_dpdk/pci_event.o 00:05:36.460 CC lib/env_dpdk/sigbus_handler.o 00:05:36.460 CC lib/env_dpdk/pci_dpdk.o 00:05:36.460 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:36.460 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:36.718 LIB libspdk_conf.a 00:05:36.718 LIB libspdk_rdma_utils.a 00:05:36.718 SO libspdk_conf.so.6.0 00:05:36.718 SO libspdk_rdma_utils.so.1.0 00:05:36.977 LIB libspdk_json.a 00:05:36.977 SYMLINK libspdk_conf.so 00:05:36.977 SYMLINK libspdk_rdma_utils.so 00:05:36.977 SO libspdk_json.so.6.0 00:05:36.977 SYMLINK libspdk_json.so 00:05:36.977 LIB libspdk_idxd.a 00:05:36.977 SO libspdk_idxd.so.12.1 00:05:37.235 LIB libspdk_vmd.a 00:05:37.235 SO libspdk_vmd.so.6.0 00:05:37.235 SYMLINK libspdk_idxd.so 00:05:37.235 CC lib/rdma_provider/common.o 00:05:37.235 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:37.235 SYMLINK libspdk_vmd.so 00:05:37.235 CC lib/jsonrpc/jsonrpc_server.o 00:05:37.235 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:37.235 CC lib/jsonrpc/jsonrpc_client.o 00:05:37.235 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:37.493 LIB libspdk_rdma_provider.a 00:05:37.494 SO libspdk_rdma_provider.so.7.0 00:05:37.494 SYMLINK libspdk_rdma_provider.so 00:05:37.494 LIB libspdk_jsonrpc.a 00:05:37.494 SO libspdk_jsonrpc.so.6.0 00:05:37.494 SYMLINK libspdk_jsonrpc.so 00:05:37.494 LIB libspdk_env_dpdk.a 00:05:37.755 SO libspdk_env_dpdk.so.15.1 00:05:37.755 SYMLINK libspdk_env_dpdk.so 00:05:38.017 CC lib/rpc/rpc.o 00:05:38.017 LIB 
libspdk_rpc.a 00:05:38.017 SO libspdk_rpc.so.6.0 00:05:38.277 SYMLINK libspdk_rpc.so 00:05:38.534 CC lib/trace/trace.o 00:05:38.534 CC lib/trace/trace_flags.o 00:05:38.534 CC lib/trace/trace_rpc.o 00:05:38.534 CC lib/notify/notify.o 00:05:38.534 CC lib/notify/notify_rpc.o 00:05:38.534 CC lib/keyring/keyring.o 00:05:38.534 CC lib/keyring/keyring_rpc.o 00:05:38.792 LIB libspdk_notify.a 00:05:38.792 SO libspdk_notify.so.6.0 00:05:38.792 LIB libspdk_trace.a 00:05:38.792 LIB libspdk_keyring.a 00:05:38.792 SYMLINK libspdk_notify.so 00:05:38.792 SO libspdk_trace.so.11.0 00:05:38.792 SO libspdk_keyring.so.2.0 00:05:38.792 SYMLINK libspdk_trace.so 00:05:38.792 SYMLINK libspdk_keyring.so 00:05:39.051 CC lib/sock/sock.o 00:05:39.051 CC lib/thread/thread.o 00:05:39.051 CC lib/sock/sock_rpc.o 00:05:39.051 CC lib/thread/iobuf.o 00:05:39.618 LIB libspdk_sock.a 00:05:39.618 SO libspdk_sock.so.10.0 00:05:39.618 SYMLINK libspdk_sock.so 00:05:39.875 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:39.875 CC lib/nvme/nvme_ctrlr.o 00:05:39.875 CC lib/nvme/nvme_fabric.o 00:05:39.875 CC lib/nvme/nvme_ns_cmd.o 00:05:39.875 CC lib/nvme/nvme_ns.o 00:05:39.875 CC lib/nvme/nvme_pcie_common.o 00:05:39.875 CC lib/nvme/nvme_pcie.o 00:05:39.875 CC lib/nvme/nvme_qpair.o 00:05:39.875 CC lib/nvme/nvme.o 00:05:39.875 CC lib/nvme/nvme_quirks.o 00:05:39.875 CC lib/nvme/nvme_transport.o 00:05:39.875 CC lib/nvme/nvme_discovery.o 00:05:39.875 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:39.876 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:39.876 CC lib/nvme/nvme_tcp.o 00:05:39.876 CC lib/nvme/nvme_opal.o 00:05:39.876 CC lib/nvme/nvme_io_msg.o 00:05:39.876 CC lib/nvme/nvme_poll_group.o 00:05:39.876 CC lib/nvme/nvme_zns.o 00:05:39.876 CC lib/nvme/nvme_stubs.o 00:05:39.876 CC lib/nvme/nvme_auth.o 00:05:39.876 CC lib/nvme/nvme_cuse.o 00:05:39.876 CC lib/nvme/nvme_vfio_user.o 00:05:39.876 CC lib/nvme/nvme_rdma.o 00:05:40.133 LIB libspdk_thread.a 00:05:40.133 SO libspdk_thread.so.11.0 00:05:40.392 SYMLINK libspdk_thread.so 00:05:40.649 
CC lib/fsdev/fsdev.o 00:05:40.649 CC lib/fsdev/fsdev_io.o 00:05:40.649 CC lib/fsdev/fsdev_rpc.o 00:05:40.649 CC lib/vfu_tgt/tgt_endpoint.o 00:05:40.649 CC lib/vfu_tgt/tgt_rpc.o 00:05:40.649 CC lib/blob/blobstore.o 00:05:40.649 CC lib/blob/request.o 00:05:40.649 CC lib/blob/zeroes.o 00:05:40.649 CC lib/blob/blob_bs_dev.o 00:05:40.649 CC lib/virtio/virtio.o 00:05:40.649 CC lib/virtio/virtio_vhost_user.o 00:05:40.649 CC lib/virtio/virtio_vfio_user.o 00:05:40.649 CC lib/virtio/virtio_pci.o 00:05:40.649 CC lib/accel/accel_rpc.o 00:05:40.649 CC lib/accel/accel.o 00:05:40.649 CC lib/accel/accel_sw.o 00:05:40.649 CC lib/init/json_config.o 00:05:40.649 CC lib/init/subsystem.o 00:05:40.649 CC lib/init/rpc.o 00:05:40.649 CC lib/init/subsystem_rpc.o 00:05:40.907 LIB libspdk_init.a 00:05:40.907 SO libspdk_init.so.6.0 00:05:40.907 LIB libspdk_vfu_tgt.a 00:05:40.907 LIB libspdk_virtio.a 00:05:40.907 SO libspdk_vfu_tgt.so.3.0 00:05:40.907 SO libspdk_virtio.so.7.0 00:05:40.907 SYMLINK libspdk_init.so 00:05:40.907 SYMLINK libspdk_vfu_tgt.so 00:05:40.907 SYMLINK libspdk_virtio.so 00:05:41.164 LIB libspdk_fsdev.a 00:05:41.164 SO libspdk_fsdev.so.2.0 00:05:41.164 SYMLINK libspdk_fsdev.so 00:05:41.164 CC lib/event/app.o 00:05:41.164 CC lib/event/reactor.o 00:05:41.164 CC lib/event/log_rpc.o 00:05:41.164 CC lib/event/app_rpc.o 00:05:41.164 CC lib/event/scheduler_static.o 00:05:41.421 LIB libspdk_accel.a 00:05:41.421 SO libspdk_accel.so.16.0 00:05:41.421 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:41.678 SYMLINK libspdk_accel.so 00:05:41.678 LIB libspdk_nvme.a 00:05:41.678 LIB libspdk_event.a 00:05:41.678 SO libspdk_event.so.14.0 00:05:41.678 SO libspdk_nvme.so.15.0 00:05:41.678 SYMLINK libspdk_event.so 00:05:41.935 CC lib/bdev/bdev.o 00:05:41.935 CC lib/bdev/bdev_rpc.o 00:05:41.935 CC lib/bdev/bdev_zone.o 00:05:41.935 CC lib/bdev/part.o 00:05:41.935 CC lib/bdev/scsi_nvme.o 00:05:41.935 SYMLINK libspdk_nvme.so 00:05:41.935 LIB libspdk_fuse_dispatcher.a 00:05:41.935 SO 
libspdk_fuse_dispatcher.so.1.0 00:05:42.192 SYMLINK libspdk_fuse_dispatcher.so 00:05:42.757 LIB libspdk_blob.a 00:05:42.757 SO libspdk_blob.so.12.0 00:05:43.015 SYMLINK libspdk_blob.so 00:05:43.274 CC lib/blobfs/blobfs.o 00:05:43.274 CC lib/blobfs/tree.o 00:05:43.274 CC lib/lvol/lvol.o 00:05:43.841 LIB libspdk_bdev.a 00:05:43.841 SO libspdk_bdev.so.17.0 00:05:43.841 LIB libspdk_blobfs.a 00:05:43.841 SO libspdk_blobfs.so.11.0 00:05:43.841 SYMLINK libspdk_bdev.so 00:05:43.841 LIB libspdk_lvol.a 00:05:43.841 SYMLINK libspdk_blobfs.so 00:05:43.841 SO libspdk_lvol.so.11.0 00:05:44.100 SYMLINK libspdk_lvol.so 00:05:44.100 CC lib/ublk/ublk.o 00:05:44.100 CC lib/ublk/ublk_rpc.o 00:05:44.100 CC lib/scsi/dev.o 00:05:44.100 CC lib/scsi/lun.o 00:05:44.100 CC lib/scsi/port.o 00:05:44.100 CC lib/scsi/scsi.o 00:05:44.100 CC lib/scsi/scsi_bdev.o 00:05:44.100 CC lib/scsi/scsi_pr.o 00:05:44.100 CC lib/scsi/scsi_rpc.o 00:05:44.100 CC lib/nvmf/ctrlr.o 00:05:44.100 CC lib/scsi/task.o 00:05:44.100 CC lib/nbd/nbd.o 00:05:44.100 CC lib/nvmf/ctrlr_discovery.o 00:05:44.100 CC lib/nbd/nbd_rpc.o 00:05:44.100 CC lib/nvmf/ctrlr_bdev.o 00:05:44.100 CC lib/ftl/ftl_core.o 00:05:44.100 CC lib/nvmf/subsystem.o 00:05:44.100 CC lib/nvmf/nvmf.o 00:05:44.100 CC lib/ftl/ftl_init.o 00:05:44.100 CC lib/ftl/ftl_layout.o 00:05:44.100 CC lib/nvmf/nvmf_rpc.o 00:05:44.100 CC lib/ftl/ftl_debug.o 00:05:44.100 CC lib/nvmf/transport.o 00:05:44.100 CC lib/nvmf/tcp.o 00:05:44.100 CC lib/ftl/ftl_io.o 00:05:44.100 CC lib/nvmf/stubs.o 00:05:44.100 CC lib/ftl/ftl_sb.o 00:05:44.100 CC lib/nvmf/mdns_server.o 00:05:44.100 CC lib/ftl/ftl_l2p.o 00:05:44.100 CC lib/nvmf/vfio_user.o 00:05:44.100 CC lib/ftl/ftl_l2p_flat.o 00:05:44.100 CC lib/nvmf/rdma.o 00:05:44.100 CC lib/nvmf/auth.o 00:05:44.100 CC lib/ftl/ftl_nv_cache.o 00:05:44.100 CC lib/ftl/ftl_band.o 00:05:44.100 CC lib/ftl/ftl_band_ops.o 00:05:44.100 CC lib/ftl/ftl_writer.o 00:05:44.100 CC lib/ftl/ftl_rq.o 00:05:44.100 CC lib/ftl/ftl_reloc.o 00:05:44.100 CC 
lib/ftl/ftl_l2p_cache.o 00:05:44.100 CC lib/ftl/ftl_p2l.o 00:05:44.100 CC lib/ftl/ftl_p2l_log.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:44.100 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:44.100 CC lib/ftl/utils/ftl_conf.o 00:05:44.100 CC lib/ftl/utils/ftl_mempool.o 00:05:44.100 CC lib/ftl/utils/ftl_md.o 00:05:44.100 CC lib/ftl/utils/ftl_bitmap.o 00:05:44.100 CC lib/ftl/utils/ftl_property.o 00:05:44.100 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:44.100 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:44.100 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:44.100 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:44.100 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:44.374 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:44.374 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:44.374 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:44.374 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:44.374 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:44.374 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:44.374 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:44.374 CC lib/ftl/ftl_trace.o 00:05:44.374 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:44.374 CC lib/ftl/base/ftl_base_bdev.o 00:05:44.374 CC lib/ftl/base/ftl_base_dev.o 00:05:44.634 LIB libspdk_nbd.a 00:05:44.634 SO libspdk_nbd.so.7.0 00:05:44.893 SYMLINK libspdk_nbd.so 00:05:44.893 LIB libspdk_scsi.a 00:05:44.893 SO libspdk_scsi.so.9.0 00:05:44.893 LIB libspdk_ublk.a 00:05:44.893 SO libspdk_ublk.so.3.0 00:05:44.893 SYMLINK libspdk_scsi.so 00:05:44.893 SYMLINK libspdk_ublk.so 00:05:45.152 LIB 
libspdk_ftl.a 00:05:45.152 SO libspdk_ftl.so.9.0 00:05:45.152 CC lib/iscsi/conn.o 00:05:45.152 CC lib/iscsi/init_grp.o 00:05:45.152 CC lib/iscsi/iscsi.o 00:05:45.152 CC lib/iscsi/param.o 00:05:45.152 CC lib/iscsi/portal_grp.o 00:05:45.152 CC lib/iscsi/tgt_node.o 00:05:45.152 CC lib/iscsi/iscsi_subsystem.o 00:05:45.152 CC lib/iscsi/iscsi_rpc.o 00:05:45.152 CC lib/iscsi/task.o 00:05:45.152 CC lib/vhost/vhost.o 00:05:45.152 CC lib/vhost/vhost_rpc.o 00:05:45.152 CC lib/vhost/vhost_scsi.o 00:05:45.411 CC lib/vhost/vhost_blk.o 00:05:45.411 CC lib/vhost/rte_vhost_user.o 00:05:45.411 SYMLINK libspdk_ftl.so 00:05:45.977 LIB libspdk_nvmf.a 00:05:45.977 SO libspdk_nvmf.so.20.0 00:05:45.977 LIB libspdk_vhost.a 00:05:46.237 SO libspdk_vhost.so.8.0 00:05:46.237 SYMLINK libspdk_nvmf.so 00:05:46.237 SYMLINK libspdk_vhost.so 00:05:46.237 LIB libspdk_iscsi.a 00:05:46.237 SO libspdk_iscsi.so.8.0 00:05:46.495 SYMLINK libspdk_iscsi.so 00:05:46.844 CC module/env_dpdk/env_dpdk_rpc.o 00:05:46.844 CC module/vfu_device/vfu_virtio.o 00:05:46.844 CC module/vfu_device/vfu_virtio_blk.o 00:05:46.844 CC module/vfu_device/vfu_virtio_rpc.o 00:05:46.844 CC module/vfu_device/vfu_virtio_scsi.o 00:05:46.844 CC module/vfu_device/vfu_virtio_fs.o 00:05:47.124 CC module/accel/ioat/accel_ioat_rpc.o 00:05:47.124 CC module/accel/ioat/accel_ioat.o 00:05:47.124 CC module/accel/error/accel_error.o 00:05:47.124 CC module/accel/error/accel_error_rpc.o 00:05:47.124 CC module/scheduler/gscheduler/gscheduler.o 00:05:47.124 CC module/accel/iaa/accel_iaa.o 00:05:47.124 CC module/accel/iaa/accel_iaa_rpc.o 00:05:47.124 CC module/keyring/file/keyring.o 00:05:47.124 CC module/keyring/file/keyring_rpc.o 00:05:47.124 CC module/accel/dsa/accel_dsa.o 00:05:47.124 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:47.124 CC module/sock/posix/posix.o 00:05:47.124 CC module/accel/dsa/accel_dsa_rpc.o 00:05:47.124 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:47.124 CC module/keyring/linux/keyring.o 00:05:47.124 CC 
module/keyring/linux/keyring_rpc.o 00:05:47.124 LIB libspdk_env_dpdk_rpc.a 00:05:47.124 CC module/fsdev/aio/fsdev_aio.o 00:05:47.124 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:47.124 CC module/fsdev/aio/linux_aio_mgr.o 00:05:47.124 CC module/blob/bdev/blob_bdev.o 00:05:47.124 SO libspdk_env_dpdk_rpc.so.6.0 00:05:47.124 SYMLINK libspdk_env_dpdk_rpc.so 00:05:47.382 LIB libspdk_scheduler_gscheduler.a 00:05:47.382 LIB libspdk_keyring_file.a 00:05:47.382 LIB libspdk_keyring_linux.a 00:05:47.382 SO libspdk_scheduler_gscheduler.so.4.0 00:05:47.382 LIB libspdk_accel_ioat.a 00:05:47.382 LIB libspdk_scheduler_dpdk_governor.a 00:05:47.382 SO libspdk_keyring_file.so.2.0 00:05:47.382 SO libspdk_keyring_linux.so.1.0 00:05:47.382 LIB libspdk_accel_error.a 00:05:47.382 SO libspdk_accel_ioat.so.6.0 00:05:47.382 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:47.382 LIB libspdk_accel_iaa.a 00:05:47.382 LIB libspdk_scheduler_dynamic.a 00:05:47.382 SYMLINK libspdk_scheduler_gscheduler.so 00:05:47.382 SO libspdk_accel_error.so.2.0 00:05:47.382 SYMLINK libspdk_keyring_file.so 00:05:47.382 SYMLINK libspdk_keyring_linux.so 00:05:47.382 SO libspdk_accel_iaa.so.3.0 00:05:47.382 SO libspdk_scheduler_dynamic.so.4.0 00:05:47.382 LIB libspdk_accel_dsa.a 00:05:47.382 SYMLINK libspdk_accel_ioat.so 00:05:47.382 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:47.382 LIB libspdk_blob_bdev.a 00:05:47.382 SYMLINK libspdk_accel_error.so 00:05:47.382 SYMLINK libspdk_accel_iaa.so 00:05:47.382 SO libspdk_accel_dsa.so.5.0 00:05:47.382 SO libspdk_blob_bdev.so.12.0 00:05:47.382 SYMLINK libspdk_scheduler_dynamic.so 00:05:47.382 LIB libspdk_vfu_device.a 00:05:47.382 SYMLINK libspdk_blob_bdev.so 00:05:47.382 SYMLINK libspdk_accel_dsa.so 00:05:47.639 SO libspdk_vfu_device.so.3.0 00:05:47.639 SYMLINK libspdk_vfu_device.so 00:05:47.639 LIB libspdk_fsdev_aio.a 00:05:47.639 SO libspdk_fsdev_aio.so.1.0 00:05:47.639 LIB libspdk_sock_posix.a 00:05:47.896 SO libspdk_sock_posix.so.6.0 00:05:47.896 SYMLINK 
libspdk_fsdev_aio.so 00:05:47.896 SYMLINK libspdk_sock_posix.so 00:05:47.896 CC module/bdev/gpt/gpt.o 00:05:47.896 CC module/bdev/gpt/vbdev_gpt.o 00:05:47.896 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:47.896 CC module/bdev/lvol/vbdev_lvol.o 00:05:47.896 CC module/bdev/malloc/bdev_malloc.o 00:05:47.896 CC module/bdev/split/vbdev_split.o 00:05:47.896 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:47.896 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:47.896 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:47.896 CC module/bdev/split/vbdev_split_rpc.o 00:05:47.896 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:47.896 CC module/bdev/error/vbdev_error.o 00:05:47.896 CC module/bdev/error/vbdev_error_rpc.o 00:05:47.896 CC module/bdev/iscsi/bdev_iscsi.o 00:05:47.896 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:47.896 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:47.896 CC module/blobfs/bdev/blobfs_bdev.o 00:05:47.896 CC module/bdev/delay/vbdev_delay.o 00:05:47.896 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:47.896 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:47.896 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:47.896 CC module/bdev/nvme/bdev_nvme.o 00:05:47.896 CC module/bdev/nvme/nvme_rpc.o 00:05:47.896 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:47.896 CC module/bdev/passthru/vbdev_passthru.o 00:05:47.896 CC module/bdev/nvme/bdev_mdns_client.o 00:05:47.896 CC module/bdev/nvme/vbdev_opal.o 00:05:47.896 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:47.896 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:47.896 CC module/bdev/null/bdev_null.o 00:05:47.896 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:47.896 CC module/bdev/null/bdev_null_rpc.o 00:05:47.896 CC module/bdev/aio/bdev_aio_rpc.o 00:05:47.896 CC module/bdev/aio/bdev_aio.o 00:05:47.896 CC module/bdev/raid/bdev_raid.o 00:05:47.896 CC module/bdev/raid/bdev_raid_rpc.o 00:05:47.896 CC module/bdev/raid/bdev_raid_sb.o 00:05:47.896 CC module/bdev/raid/raid0.o 00:05:47.896 CC module/bdev/raid/raid1.o 00:05:47.896 CC 
module/bdev/raid/concat.o 00:05:47.896 CC module/bdev/ftl/bdev_ftl.o 00:05:47.896 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:48.160 LIB libspdk_blobfs_bdev.a 00:05:48.160 SO libspdk_blobfs_bdev.so.6.0 00:05:48.160 LIB libspdk_bdev_gpt.a 00:05:48.160 LIB libspdk_bdev_split.a 00:05:48.160 SO libspdk_bdev_gpt.so.6.0 00:05:48.418 SO libspdk_bdev_split.so.6.0 00:05:48.418 LIB libspdk_bdev_error.a 00:05:48.418 SYMLINK libspdk_blobfs_bdev.so 00:05:48.418 LIB libspdk_bdev_null.a 00:05:48.418 LIB libspdk_bdev_ftl.a 00:05:48.418 LIB libspdk_bdev_passthru.a 00:05:48.418 SO libspdk_bdev_null.so.6.0 00:05:48.418 SO libspdk_bdev_error.so.6.0 00:05:48.418 SO libspdk_bdev_ftl.so.6.0 00:05:48.418 SYMLINK libspdk_bdev_gpt.so 00:05:48.418 SYMLINK libspdk_bdev_split.so 00:05:48.418 LIB libspdk_bdev_zone_block.a 00:05:48.418 SO libspdk_bdev_passthru.so.6.0 00:05:48.418 LIB libspdk_bdev_iscsi.a 00:05:48.418 LIB libspdk_bdev_aio.a 00:05:48.418 LIB libspdk_bdev_malloc.a 00:05:48.418 SO libspdk_bdev_zone_block.so.6.0 00:05:48.418 SYMLINK libspdk_bdev_error.so 00:05:48.418 SYMLINK libspdk_bdev_null.so 00:05:48.418 LIB libspdk_bdev_delay.a 00:05:48.418 SO libspdk_bdev_iscsi.so.6.0 00:05:48.418 SO libspdk_bdev_aio.so.6.0 00:05:48.418 SYMLINK libspdk_bdev_ftl.so 00:05:48.418 SO libspdk_bdev_malloc.so.6.0 00:05:48.418 SYMLINK libspdk_bdev_passthru.so 00:05:48.418 SO libspdk_bdev_delay.so.6.0 00:05:48.418 SYMLINK libspdk_bdev_zone_block.so 00:05:48.418 SYMLINK libspdk_bdev_iscsi.so 00:05:48.418 SYMLINK libspdk_bdev_malloc.so 00:05:48.418 SYMLINK libspdk_bdev_aio.so 00:05:48.418 LIB libspdk_bdev_virtio.a 00:05:48.418 LIB libspdk_bdev_lvol.a 00:05:48.418 SYMLINK libspdk_bdev_delay.so 00:05:48.418 SO libspdk_bdev_virtio.so.6.0 00:05:48.418 SO libspdk_bdev_lvol.so.6.0 00:05:48.675 SYMLINK libspdk_bdev_virtio.so 00:05:48.675 SYMLINK libspdk_bdev_lvol.so 00:05:48.933 LIB libspdk_bdev_raid.a 00:05:48.933 SO libspdk_bdev_raid.so.6.0 00:05:48.933 SYMLINK libspdk_bdev_raid.so 00:05:49.866 LIB 
libspdk_bdev_nvme.a 00:05:49.866 SO libspdk_bdev_nvme.so.7.1 00:05:49.866 SYMLINK libspdk_bdev_nvme.so 00:05:50.805 CC module/event/subsystems/iobuf/iobuf.o 00:05:50.805 CC module/event/subsystems/vmd/vmd.o 00:05:50.805 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:50.805 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:50.805 CC module/event/subsystems/sock/sock.o 00:05:50.805 CC module/event/subsystems/keyring/keyring.o 00:05:50.805 CC module/event/subsystems/scheduler/scheduler.o 00:05:50.805 CC module/event/subsystems/fsdev/fsdev.o 00:05:50.805 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:50.805 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:50.805 LIB libspdk_event_sock.a 00:05:50.805 LIB libspdk_event_fsdev.a 00:05:50.805 LIB libspdk_event_vmd.a 00:05:50.805 LIB libspdk_event_keyring.a 00:05:50.805 LIB libspdk_event_iobuf.a 00:05:50.805 LIB libspdk_event_vhost_blk.a 00:05:50.805 LIB libspdk_event_vfu_tgt.a 00:05:50.805 LIB libspdk_event_scheduler.a 00:05:50.805 SO libspdk_event_fsdev.so.1.0 00:05:50.805 SO libspdk_event_sock.so.5.0 00:05:50.805 SO libspdk_event_vmd.so.6.0 00:05:50.805 SO libspdk_event_keyring.so.1.0 00:05:50.805 SO libspdk_event_iobuf.so.3.0 00:05:50.805 SO libspdk_event_vhost_blk.so.3.0 00:05:50.805 SO libspdk_event_vfu_tgt.so.3.0 00:05:50.805 SO libspdk_event_scheduler.so.4.0 00:05:50.805 SYMLINK libspdk_event_fsdev.so 00:05:50.805 SYMLINK libspdk_event_sock.so 00:05:50.805 SYMLINK libspdk_event_vmd.so 00:05:50.805 SYMLINK libspdk_event_vhost_blk.so 00:05:50.805 SYMLINK libspdk_event_keyring.so 00:05:50.805 SYMLINK libspdk_event_iobuf.so 00:05:50.805 SYMLINK libspdk_event_vfu_tgt.so 00:05:50.805 SYMLINK libspdk_event_scheduler.so 00:05:51.064 CC module/event/subsystems/accel/accel.o 00:05:51.323 LIB libspdk_event_accel.a 00:05:51.323 SO libspdk_event_accel.so.6.0 00:05:51.323 SYMLINK libspdk_event_accel.so 00:05:51.582 CC module/event/subsystems/bdev/bdev.o 00:05:51.840 LIB libspdk_event_bdev.a 00:05:51.840 SO 
libspdk_event_bdev.so.6.0 00:05:51.840 SYMLINK libspdk_event_bdev.so 00:05:52.408 CC module/event/subsystems/scsi/scsi.o 00:05:52.408 CC module/event/subsystems/ublk/ublk.o 00:05:52.408 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:52.408 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:52.408 CC module/event/subsystems/nbd/nbd.o 00:05:52.408 LIB libspdk_event_ublk.a 00:05:52.408 LIB libspdk_event_nbd.a 00:05:52.408 LIB libspdk_event_scsi.a 00:05:52.408 SO libspdk_event_ublk.so.3.0 00:05:52.408 SO libspdk_event_nbd.so.6.0 00:05:52.408 SO libspdk_event_scsi.so.6.0 00:05:52.408 LIB libspdk_event_nvmf.a 00:05:52.408 SYMLINK libspdk_event_ublk.so 00:05:52.408 SYMLINK libspdk_event_nbd.so 00:05:52.408 SO libspdk_event_nvmf.so.6.0 00:05:52.408 SYMLINK libspdk_event_scsi.so 00:05:52.666 SYMLINK libspdk_event_nvmf.so 00:05:52.926 CC module/event/subsystems/iscsi/iscsi.o 00:05:52.926 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:52.926 LIB libspdk_event_vhost_scsi.a 00:05:52.926 LIB libspdk_event_iscsi.a 00:05:52.926 SO libspdk_event_vhost_scsi.so.3.0 00:05:52.926 SO libspdk_event_iscsi.so.6.0 00:05:53.185 SYMLINK libspdk_event_vhost_scsi.so 00:05:53.185 SYMLINK libspdk_event_iscsi.so 00:05:53.185 SO libspdk.so.6.0 00:05:53.185 SYMLINK libspdk.so 00:05:53.760 CC app/trace_record/trace_record.o 00:05:53.760 CXX app/trace/trace.o 00:05:53.760 CC app/spdk_top/spdk_top.o 00:05:53.760 CC app/spdk_lspci/spdk_lspci.o 00:05:53.760 CC app/spdk_nvme_identify/identify.o 00:05:53.760 CC test/rpc_client/rpc_client_test.o 00:05:53.760 TEST_HEADER include/spdk/accel.h 00:05:53.760 CC app/spdk_nvme_discover/discovery_aer.o 00:05:53.761 TEST_HEADER include/spdk/accel_module.h 00:05:53.761 TEST_HEADER include/spdk/assert.h 00:05:53.761 TEST_HEADER include/spdk/barrier.h 00:05:53.761 TEST_HEADER include/spdk/bdev.h 00:05:53.761 TEST_HEADER include/spdk/base64.h 00:05:53.761 TEST_HEADER include/spdk/bdev_module.h 00:05:53.761 TEST_HEADER include/spdk/bdev_zone.h 00:05:53.761 
TEST_HEADER include/spdk/bit_array.h 00:05:53.761 TEST_HEADER include/spdk/bit_pool.h 00:05:53.761 CC app/spdk_nvme_perf/perf.o 00:05:53.761 TEST_HEADER include/spdk/blob_bdev.h 00:05:53.761 TEST_HEADER include/spdk/blobfs.h 00:05:53.761 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:53.761 TEST_HEADER include/spdk/blob.h 00:05:53.761 TEST_HEADER include/spdk/conf.h 00:05:53.761 TEST_HEADER include/spdk/cpuset.h 00:05:53.761 TEST_HEADER include/spdk/config.h 00:05:53.761 TEST_HEADER include/spdk/crc32.h 00:05:53.761 TEST_HEADER include/spdk/crc16.h 00:05:53.761 TEST_HEADER include/spdk/crc64.h 00:05:53.761 TEST_HEADER include/spdk/dif.h 00:05:53.761 TEST_HEADER include/spdk/dma.h 00:05:53.761 TEST_HEADER include/spdk/env_dpdk.h 00:05:53.761 TEST_HEADER include/spdk/endian.h 00:05:53.761 TEST_HEADER include/spdk/env.h 00:05:53.761 TEST_HEADER include/spdk/event.h 00:05:53.761 TEST_HEADER include/spdk/fd.h 00:05:53.761 TEST_HEADER include/spdk/fd_group.h 00:05:53.761 TEST_HEADER include/spdk/fsdev.h 00:05:53.761 TEST_HEADER include/spdk/fsdev_module.h 00:05:53.761 TEST_HEADER include/spdk/file.h 00:05:53.761 TEST_HEADER include/spdk/ftl.h 00:05:53.761 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:53.761 TEST_HEADER include/spdk/gpt_spec.h 00:05:53.761 TEST_HEADER include/spdk/histogram_data.h 00:05:53.761 TEST_HEADER include/spdk/hexlify.h 00:05:53.761 TEST_HEADER include/spdk/idxd.h 00:05:53.761 TEST_HEADER include/spdk/init.h 00:05:53.761 TEST_HEADER include/spdk/idxd_spec.h 00:05:53.761 TEST_HEADER include/spdk/ioat.h 00:05:53.761 TEST_HEADER include/spdk/ioat_spec.h 00:05:53.761 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:53.761 TEST_HEADER include/spdk/json.h 00:05:53.761 TEST_HEADER include/spdk/iscsi_spec.h 00:05:53.761 TEST_HEADER include/spdk/jsonrpc.h 00:05:53.761 TEST_HEADER include/spdk/keyring_module.h 00:05:53.761 TEST_HEADER include/spdk/log.h 00:05:53.761 TEST_HEADER include/spdk/keyring.h 00:05:53.761 TEST_HEADER include/spdk/lvol.h 00:05:53.761 
TEST_HEADER include/spdk/likely.h 00:05:53.761 TEST_HEADER include/spdk/memory.h 00:05:53.761 TEST_HEADER include/spdk/md5.h 00:05:53.761 TEST_HEADER include/spdk/mmio.h 00:05:53.761 CC app/spdk_dd/spdk_dd.o 00:05:53.761 TEST_HEADER include/spdk/net.h 00:05:53.761 TEST_HEADER include/spdk/nbd.h 00:05:53.761 TEST_HEADER include/spdk/notify.h 00:05:53.761 TEST_HEADER include/spdk/nvme_intel.h 00:05:53.761 TEST_HEADER include/spdk/nvme.h 00:05:53.761 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:53.761 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:53.761 TEST_HEADER include/spdk/nvme_spec.h 00:05:53.761 TEST_HEADER include/spdk/nvme_zns.h 00:05:53.761 CC app/nvmf_tgt/nvmf_main.o 00:05:53.761 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:53.761 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:53.761 TEST_HEADER include/spdk/nvmf.h 00:05:53.761 CC app/iscsi_tgt/iscsi_tgt.o 00:05:53.761 TEST_HEADER include/spdk/nvmf_spec.h 00:05:53.761 TEST_HEADER include/spdk/nvmf_transport.h 00:05:53.761 TEST_HEADER include/spdk/opal.h 00:05:53.761 TEST_HEADER include/spdk/opal_spec.h 00:05:53.761 TEST_HEADER include/spdk/pci_ids.h 00:05:53.761 TEST_HEADER include/spdk/queue.h 00:05:53.761 TEST_HEADER include/spdk/pipe.h 00:05:53.761 TEST_HEADER include/spdk/reduce.h 00:05:53.761 TEST_HEADER include/spdk/scheduler.h 00:05:53.761 TEST_HEADER include/spdk/rpc.h 00:05:53.761 TEST_HEADER include/spdk/scsi.h 00:05:53.761 TEST_HEADER include/spdk/scsi_spec.h 00:05:53.761 TEST_HEADER include/spdk/sock.h 00:05:53.761 TEST_HEADER include/spdk/stdinc.h 00:05:53.761 TEST_HEADER include/spdk/thread.h 00:05:53.761 TEST_HEADER include/spdk/string.h 00:05:53.761 TEST_HEADER include/spdk/trace.h 00:05:53.761 TEST_HEADER include/spdk/trace_parser.h 00:05:53.761 TEST_HEADER include/spdk/tree.h 00:05:53.761 TEST_HEADER include/spdk/ublk.h 00:05:53.761 TEST_HEADER include/spdk/uuid.h 00:05:53.761 TEST_HEADER include/spdk/util.h 00:05:53.761 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:53.761 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:05:53.761 TEST_HEADER include/spdk/version.h 00:05:53.761 TEST_HEADER include/spdk/vmd.h 00:05:53.761 TEST_HEADER include/spdk/vhost.h 00:05:53.761 TEST_HEADER include/spdk/xor.h 00:05:53.761 CC app/spdk_tgt/spdk_tgt.o 00:05:53.761 TEST_HEADER include/spdk/zipf.h 00:05:53.761 CXX test/cpp_headers/accel_module.o 00:05:53.761 CXX test/cpp_headers/accel.o 00:05:53.761 CXX test/cpp_headers/assert.o 00:05:53.761 CXX test/cpp_headers/barrier.o 00:05:53.761 CXX test/cpp_headers/base64.o 00:05:53.761 CXX test/cpp_headers/bdev.o 00:05:53.761 CXX test/cpp_headers/bdev_zone.o 00:05:53.761 CXX test/cpp_headers/bit_array.o 00:05:53.761 CXX test/cpp_headers/bdev_module.o 00:05:53.761 CXX test/cpp_headers/bit_pool.o 00:05:53.761 CXX test/cpp_headers/blobfs_bdev.o 00:05:53.761 CXX test/cpp_headers/blob.o 00:05:53.761 CXX test/cpp_headers/blob_bdev.o 00:05:53.761 CXX test/cpp_headers/blobfs.o 00:05:53.761 CXX test/cpp_headers/conf.o 00:05:53.761 CXX test/cpp_headers/config.o 00:05:53.761 CXX test/cpp_headers/cpuset.o 00:05:53.761 CXX test/cpp_headers/crc32.o 00:05:53.761 CXX test/cpp_headers/crc64.o 00:05:53.761 CXX test/cpp_headers/dif.o 00:05:53.761 CXX test/cpp_headers/crc16.o 00:05:53.761 CXX test/cpp_headers/endian.o 00:05:53.761 CXX test/cpp_headers/dma.o 00:05:53.761 CXX test/cpp_headers/env.o 00:05:53.761 CXX test/cpp_headers/env_dpdk.o 00:05:53.761 CXX test/cpp_headers/event.o 00:05:53.761 CXX test/cpp_headers/fd_group.o 00:05:53.761 CXX test/cpp_headers/fsdev.o 00:05:53.761 CXX test/cpp_headers/fsdev_module.o 00:05:53.761 CXX test/cpp_headers/fd.o 00:05:53.761 CXX test/cpp_headers/ftl.o 00:05:53.761 CXX test/cpp_headers/file.o 00:05:53.761 CXX test/cpp_headers/fuse_dispatcher.o 00:05:53.761 CXX test/cpp_headers/hexlify.o 00:05:53.761 CXX test/cpp_headers/gpt_spec.o 00:05:53.761 CXX test/cpp_headers/idxd.o 00:05:53.761 CXX test/cpp_headers/idxd_spec.o 00:05:53.761 CXX test/cpp_headers/histogram_data.o 00:05:53.761 CXX 
test/cpp_headers/ioat.o 00:05:53.761 CXX test/cpp_headers/init.o 00:05:53.761 CXX test/cpp_headers/iscsi_spec.o 00:05:53.761 CXX test/cpp_headers/json.o 00:05:53.761 CXX test/cpp_headers/jsonrpc.o 00:05:53.761 CXX test/cpp_headers/keyring.o 00:05:53.761 CXX test/cpp_headers/ioat_spec.o 00:05:53.761 CXX test/cpp_headers/keyring_module.o 00:05:53.761 CXX test/cpp_headers/log.o 00:05:53.761 CXX test/cpp_headers/likely.o 00:05:53.761 CXX test/cpp_headers/lvol.o 00:05:53.761 CXX test/cpp_headers/mmio.o 00:05:53.761 CXX test/cpp_headers/md5.o 00:05:53.761 CXX test/cpp_headers/memory.o 00:05:53.761 CXX test/cpp_headers/net.o 00:05:53.761 CXX test/cpp_headers/nbd.o 00:05:53.761 CXX test/cpp_headers/nvme.o 00:05:53.761 CXX test/cpp_headers/notify.o 00:05:53.761 CXX test/cpp_headers/nvme_intel.o 00:05:53.761 CXX test/cpp_headers/nvme_ocssd.o 00:05:53.761 CXX test/cpp_headers/nvme_spec.o 00:05:53.761 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:53.761 CXX test/cpp_headers/nvme_zns.o 00:05:53.761 CXX test/cpp_headers/nvmf_cmd.o 00:05:53.761 CXX test/cpp_headers/nvmf.o 00:05:53.761 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:53.761 CXX test/cpp_headers/nvmf_spec.o 00:05:53.761 CXX test/cpp_headers/nvmf_transport.o 00:05:53.761 CXX test/cpp_headers/opal.o 00:05:53.761 CC test/env/pci/pci_ut.o 00:05:53.761 CC test/env/memory/memory_ut.o 00:05:53.761 CC examples/util/zipf/zipf.o 00:05:53.761 CC examples/ioat/verify/verify.o 00:05:53.761 CC test/thread/poller_perf/poller_perf.o 00:05:53.761 CC test/env/vtophys/vtophys.o 00:05:53.761 CC test/app/stub/stub.o 00:05:53.761 CC test/app/histogram_perf/histogram_perf.o 00:05:53.761 CC examples/ioat/perf/perf.o 00:05:53.761 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:53.761 CC app/fio/nvme/fio_plugin.o 00:05:53.761 CC test/app/jsoncat/jsoncat.o 00:05:54.033 CC test/dma/test_dma/test_dma.o 00:05:54.033 CC test/app/bdev_svc/bdev_svc.o 00:05:54.033 LINK spdk_lspci 00:05:54.033 CC app/fio/bdev/fio_plugin.o 00:05:54.033 LINK 
spdk_trace_record 00:05:54.033 LINK nvmf_tgt 00:05:54.294 CC test/env/mem_callbacks/mem_callbacks.o 00:05:54.294 LINK iscsi_tgt 00:05:54.294 LINK rpc_client_test 00:05:54.294 LINK interrupt_tgt 00:05:54.294 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:54.294 LINK spdk_nvme_discover 00:05:54.294 LINK poller_perf 00:05:54.294 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:54.294 CXX test/cpp_headers/opal_spec.o 00:05:54.294 CXX test/cpp_headers/pci_ids.o 00:05:54.294 CXX test/cpp_headers/pipe.o 00:05:54.294 CXX test/cpp_headers/queue.o 00:05:54.294 LINK stub 00:05:54.294 CXX test/cpp_headers/reduce.o 00:05:54.294 CXX test/cpp_headers/rpc.o 00:05:54.294 CXX test/cpp_headers/scheduler.o 00:05:54.294 CXX test/cpp_headers/scsi.o 00:05:54.294 CXX test/cpp_headers/scsi_spec.o 00:05:54.294 CXX test/cpp_headers/sock.o 00:05:54.294 CXX test/cpp_headers/stdinc.o 00:05:54.294 CXX test/cpp_headers/string.o 00:05:54.294 CXX test/cpp_headers/thread.o 00:05:54.294 CXX test/cpp_headers/trace.o 00:05:54.294 CXX test/cpp_headers/trace_parser.o 00:05:54.294 CXX test/cpp_headers/tree.o 00:05:54.294 CXX test/cpp_headers/ublk.o 00:05:54.553 CXX test/cpp_headers/util.o 00:05:54.553 CXX test/cpp_headers/uuid.o 00:05:54.553 CXX test/cpp_headers/version.o 00:05:54.553 CXX test/cpp_headers/vfio_user_pci.o 00:05:54.553 CXX test/cpp_headers/vfio_user_spec.o 00:05:54.553 CXX test/cpp_headers/vhost.o 00:05:54.553 CXX test/cpp_headers/vmd.o 00:05:54.553 CXX test/cpp_headers/xor.o 00:05:54.553 CXX test/cpp_headers/zipf.o 00:05:54.553 LINK verify 00:05:54.553 LINK spdk_tgt 00:05:54.553 LINK bdev_svc 00:05:54.553 LINK zipf 00:05:54.553 LINK histogram_perf 00:05:54.553 LINK vtophys 00:05:54.554 LINK jsoncat 00:05:54.554 LINK env_dpdk_post_init 00:05:54.554 LINK spdk_dd 00:05:54.554 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:54.554 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:54.554 LINK ioat_perf 00:05:54.554 LINK pci_ut 00:05:54.811 LINK spdk_trace 00:05:54.811 CC 
test/event/reactor_perf/reactor_perf.o 00:05:54.811 LINK spdk_bdev 00:05:54.811 CC test/event/event_perf/event_perf.o 00:05:54.811 CC test/event/reactor/reactor.o 00:05:54.811 LINK spdk_nvme 00:05:54.811 CC test/event/app_repeat/app_repeat.o 00:05:54.811 LINK nvme_fuzz 00:05:54.811 CC test/event/scheduler/scheduler.o 00:05:54.811 LINK test_dma 00:05:54.811 CC examples/idxd/perf/perf.o 00:05:54.811 CC examples/sock/hello_world/hello_sock.o 00:05:54.811 CC examples/vmd/lsvmd/lsvmd.o 00:05:54.811 CC examples/vmd/led/led.o 00:05:55.068 CC examples/thread/thread/thread_ex.o 00:05:55.068 LINK event_perf 00:05:55.068 LINK reactor_perf 00:05:55.068 LINK reactor 00:05:55.068 LINK spdk_nvme_perf 00:05:55.068 LINK vhost_fuzz 00:05:55.068 LINK app_repeat 00:05:55.068 LINK spdk_nvme_identify 00:05:55.068 LINK mem_callbacks 00:05:55.068 CC app/vhost/vhost.o 00:05:55.068 LINK led 00:05:55.068 LINK lsvmd 00:05:55.068 LINK scheduler 00:05:55.068 LINK spdk_top 00:05:55.068 LINK hello_sock 00:05:55.326 LINK thread 00:05:55.326 LINK idxd_perf 00:05:55.326 LINK vhost 00:05:55.326 CC test/nvme/aer/aer.o 00:05:55.326 CC test/nvme/simple_copy/simple_copy.o 00:05:55.326 CC test/nvme/err_injection/err_injection.o 00:05:55.326 CC test/nvme/reset/reset.o 00:05:55.326 CC test/nvme/e2edp/nvme_dp.o 00:05:55.326 CC test/nvme/reserve/reserve.o 00:05:55.326 CC test/nvme/compliance/nvme_compliance.o 00:05:55.326 CC test/nvme/fused_ordering/fused_ordering.o 00:05:55.326 CC test/nvme/sgl/sgl.o 00:05:55.326 CC test/nvme/cuse/cuse.o 00:05:55.326 CC test/nvme/overhead/overhead.o 00:05:55.326 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:55.326 CC test/nvme/startup/startup.o 00:05:55.326 CC test/nvme/fdp/fdp.o 00:05:55.326 CC test/nvme/connect_stress/connect_stress.o 00:05:55.326 CC test/nvme/boot_partition/boot_partition.o 00:05:55.326 CC test/accel/dif/dif.o 00:05:55.326 LINK memory_ut 00:05:55.326 CC test/blobfs/mkfs/mkfs.o 00:05:55.629 CC test/lvol/esnap/esnap.o 00:05:55.629 CC 
examples/nvme/hotplug/hotplug.o 00:05:55.629 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:55.629 CC examples/nvme/abort/abort.o 00:05:55.629 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:55.629 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:55.629 LINK err_injection 00:05:55.629 CC examples/nvme/reconnect/reconnect.o 00:05:55.629 CC examples/nvme/hello_world/hello_world.o 00:05:55.629 CC examples/nvme/arbitration/arbitration.o 00:05:55.629 LINK startup 00:05:55.629 LINK boot_partition 00:05:55.629 LINK reserve 00:05:55.629 LINK doorbell_aers 00:05:55.629 LINK simple_copy 00:05:55.629 LINK connect_stress 00:05:55.629 LINK fused_ordering 00:05:55.629 LINK reset 00:05:55.629 LINK aer 00:05:55.629 LINK mkfs 00:05:55.629 LINK sgl 00:05:55.629 LINK nvme_dp 00:05:55.629 CC examples/accel/perf/accel_perf.o 00:05:55.629 LINK overhead 00:05:55.629 CC examples/blob/hello_world/hello_blob.o 00:05:55.629 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:55.629 LINK nvme_compliance 00:05:55.886 CC examples/blob/cli/blobcli.o 00:05:55.886 LINK fdp 00:05:55.886 LINK cmb_copy 00:05:55.886 LINK pmr_persistence 00:05:55.886 LINK hotplug 00:05:55.886 LINK hello_world 00:05:55.886 LINK reconnect 00:05:55.886 LINK arbitration 00:05:55.886 LINK abort 00:05:55.886 LINK iscsi_fuzz 00:05:56.144 LINK hello_blob 00:05:56.144 LINK dif 00:05:56.144 LINK hello_fsdev 00:05:56.144 LINK nvme_manage 00:05:56.144 LINK accel_perf 00:05:56.144 LINK blobcli 00:05:56.401 LINK cuse 00:05:56.659 CC test/bdev/bdevio/bdevio.o 00:05:56.659 CC examples/bdev/hello_world/hello_bdev.o 00:05:56.659 CC examples/bdev/bdevperf/bdevperf.o 00:05:56.916 LINK hello_bdev 00:05:56.916 LINK bdevio 00:05:57.174 LINK bdevperf 00:05:57.738 CC examples/nvmf/nvmf/nvmf.o 00:05:57.995 LINK nvmf 00:05:59.371 LINK esnap 00:05:59.371 00:05:59.371 real 0m55.691s 00:05:59.371 user 8m15.843s 00:05:59.371 sys 3m49.291s 00:05:59.371 10:16:36 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:59.371 10:16:36 make 
-- common/autotest_common.sh@10 -- $ set +x 00:05:59.371 ************************************ 00:05:59.371 END TEST make 00:05:59.371 ************************************ 00:05:59.371 10:16:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:59.371 10:16:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:59.371 10:16:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:59.371 10:16:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.371 10:16:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:59.371 10:16:37 -- pm/common@44 -- $ pid=2384901 00:05:59.371 10:16:37 -- pm/common@50 -- $ kill -TERM 2384901 00:05:59.371 10:16:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.371 10:16:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:59.371 10:16:37 -- pm/common@44 -- $ pid=2384902 00:05:59.371 10:16:37 -- pm/common@50 -- $ kill -TERM 2384902 00:05:59.371 10:16:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.371 10:16:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:59.371 10:16:37 -- pm/common@44 -- $ pid=2384904 00:05:59.371 10:16:37 -- pm/common@50 -- $ kill -TERM 2384904 00:05:59.371 10:16:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.371 10:16:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:59.371 10:16:37 -- pm/common@44 -- $ pid=2384931 00:05:59.371 10:16:37 -- pm/common@50 -- $ sudo -E kill -TERM 2384931 00:05:59.371 10:16:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:59.371 10:16:37 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:59.631 10:16:37 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:59.631 10:16:37 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:59.631 10:16:37 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.631 10:16:37 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.631 10:16:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.631 10:16:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.631 10:16:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.631 10:16:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.631 10:16:37 -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.631 10:16:37 -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.631 10:16:37 -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.631 10:16:37 -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.631 10:16:37 -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.631 10:16:37 -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.631 10:16:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.631 10:16:37 -- scripts/common.sh@344 -- # case "$op" in 00:05:59.631 10:16:37 -- scripts/common.sh@345 -- # : 1 00:05:59.631 10:16:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.631 10:16:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.631 10:16:37 -- scripts/common.sh@365 -- # decimal 1 00:05:59.631 10:16:37 -- scripts/common.sh@353 -- # local d=1 00:05:59.631 10:16:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.631 10:16:37 -- scripts/common.sh@355 -- # echo 1 00:05:59.631 10:16:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.631 10:16:37 -- scripts/common.sh@366 -- # decimal 2 00:05:59.631 10:16:37 -- scripts/common.sh@353 -- # local d=2 00:05:59.631 10:16:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.631 10:16:37 -- scripts/common.sh@355 -- # echo 2 00:05:59.631 10:16:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.631 10:16:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.631 10:16:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.631 10:16:37 -- scripts/common.sh@368 -- # return 0 00:05:59.631 10:16:37 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.631 10:16:37 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.631 --rc genhtml_branch_coverage=1 00:05:59.631 --rc genhtml_function_coverage=1 00:05:59.631 --rc genhtml_legend=1 00:05:59.631 --rc geninfo_all_blocks=1 00:05:59.631 --rc geninfo_unexecuted_blocks=1 00:05:59.631 00:05:59.631 ' 00:05:59.631 10:16:37 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.631 --rc genhtml_branch_coverage=1 00:05:59.631 --rc genhtml_function_coverage=1 00:05:59.631 --rc genhtml_legend=1 00:05:59.631 --rc geninfo_all_blocks=1 00:05:59.631 --rc geninfo_unexecuted_blocks=1 00:05:59.631 00:05:59.631 ' 00:05:59.631 10:16:37 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:59.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.631 --rc genhtml_branch_coverage=1 00:05:59.631 --rc 
genhtml_function_coverage=1 00:05:59.631 --rc genhtml_legend=1 00:05:59.631 --rc geninfo_all_blocks=1 00:05:59.631 --rc geninfo_unexecuted_blocks=1 00:05:59.631 00:05:59.631 ' 00:05:59.631 10:16:37 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.631 --rc genhtml_branch_coverage=1 00:05:59.631 --rc genhtml_function_coverage=1 00:05:59.631 --rc genhtml_legend=1 00:05:59.631 --rc geninfo_all_blocks=1 00:05:59.631 --rc geninfo_unexecuted_blocks=1 00:05:59.631 00:05:59.631 ' 00:05:59.631 10:16:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.631 10:16:37 -- nvmf/common.sh@7 -- # uname -s 00:05:59.631 10:16:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.631 10:16:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.632 10:16:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.632 10:16:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.632 10:16:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.632 10:16:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.632 10:16:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.632 10:16:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.632 10:16:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.632 10:16:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.632 10:16:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:59.632 10:16:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:59.632 10:16:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.632 10:16:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.632 10:16:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.632 10:16:37 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.632 10:16:37 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.632 10:16:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.632 10:16:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.632 10:16:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.632 10:16:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.632 10:16:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.632 10:16:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.632 10:16:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.632 10:16:37 -- paths/export.sh@5 -- # export PATH 00:05:59.632 10:16:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.632 10:16:37 -- nvmf/common.sh@51 -- # : 0 00:05:59.632 10:16:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.632 10:16:37 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:59.632 10:16:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.632 10:16:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.632 10:16:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.632 10:16:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.632 10:16:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.632 10:16:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.632 10:16:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.632 10:16:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:59.632 10:16:37 -- spdk/autotest.sh@32 -- # uname -s 00:05:59.632 10:16:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:59.632 10:16:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:59.632 10:16:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:59.632 10:16:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:59.632 10:16:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:59.632 10:16:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:59.632 10:16:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:59.632 10:16:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:59.632 10:16:37 -- spdk/autotest.sh@48 -- # udevadm_pid=2447368 00:05:59.632 10:16:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:59.632 10:16:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:59.632 10:16:37 -- pm/common@17 -- # local monitor 00:05:59.632 10:16:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.632 10:16:37 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:59.632 10:16:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.632 10:16:37 -- pm/common@21 -- # date +%s 00:05:59.632 10:16:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.632 10:16:37 -- pm/common@21 -- # date +%s 00:05:59.632 10:16:37 -- pm/common@25 -- # sleep 1 00:05:59.632 10:16:37 -- pm/common@21 -- # date +%s 00:05:59.632 10:16:37 -- pm/common@21 -- # date +%s 00:05:59.632 10:16:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735797 00:05:59.632 10:16:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735797 00:05:59.632 10:16:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735797 00:05:59.632 10:16:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735797 00:05:59.632 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735797_collect-cpu-load.pm.log 00:05:59.632 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735797_collect-vmstat.pm.log 00:05:59.632 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735797_collect-cpu-temp.pm.log 00:05:59.632 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735797_collect-bmc-pm.bmc.pm.log 00:06:00.596 
10:16:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:00.596 10:16:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:00.596 10:16:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.596 10:16:38 -- common/autotest_common.sh@10 -- # set +x 00:06:00.596 10:16:38 -- spdk/autotest.sh@59 -- # create_test_list 00:06:00.596 10:16:38 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:00.596 10:16:38 -- common/autotest_common.sh@10 -- # set +x 00:06:00.596 10:16:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:00.855 10:16:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.856 10:16:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.856 10:16:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:00.856 10:16:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.856 10:16:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:00.856 10:16:38 -- common/autotest_common.sh@1457 -- # uname 00:06:00.856 10:16:38 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:00.856 10:16:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:00.856 10:16:38 -- common/autotest_common.sh@1477 -- # uname 00:06:00.856 10:16:38 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:00.856 10:16:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:00.856 10:16:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:00.856 lcov: LCOV version 1.15 00:06:00.856 10:16:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:13.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:13.062 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:27.947 10:17:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:27.947 10:17:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.947 10:17:03 -- common/autotest_common.sh@10 -- # set +x 00:06:27.947 10:17:03 -- spdk/autotest.sh@78 -- # rm -f 00:06:27.947 10:17:03 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:28.514 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:06:28.514 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:06:28.514 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:06:28.772 
0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:06:28.772 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:06:28.772 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:06:28.772 10:17:06 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:28.772 10:17:06 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:28.772 10:17:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:28.772 10:17:06 -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:06:28.772 10:17:06 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:06:28.772 10:17:06 -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:06:28.772 10:17:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:28.772 10:17:06 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0
00:06:28.772 10:17:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:28.772 10:17:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:06:28.772 10:17:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:28.772 10:17:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:28.772 10:17:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:28.772 10:17:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:28.772 10:17:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:28.772 10:17:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:28.772 10:17:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:28.772 10:17:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:28.772 10:17:06 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:28.772 No valid GPT data, bailing
00:06:28.772 10:17:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:28.772 10:17:06 -- scripts/common.sh@394 -- # pt=
00:06:28.772 10:17:06 -- scripts/common.sh@395 -- # return 1
00:06:28.772 10:17:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:28.772 1+0 records in
00:06:28.772 1+0 records out
00:06:28.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00187457 s, 559 MB/s
00:06:28.772 10:17:06 -- spdk/autotest.sh@105 -- # sync
00:06:28.772 10:17:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:28.772 10:17:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:28.772 10:17:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:35.335 10:17:11 -- spdk/autotest.sh@111 -- # uname -s
00:06:35.335 10:17:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:35.335 10:17:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:35.335 10:17:11 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:06:37.241 Hugepages
00:06:37.241 node hugesize free / total
00:06:37.241 node0 1048576kB 0 / 0
00:06:37.241 node0 2048kB 0 / 0
00:06:37.241 node1 1048576kB 0 / 0
00:06:37.241 node1 2048kB 0 / 0
00:06:37.241
00:06:37.241 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:37.241 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:06:37.241 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:06:37.241 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:06:37.241 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:06:37.241 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:06:37.241 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:06:37.241 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:06:37.241 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:06:37.241 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:06:37.241 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:06:37.241 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:06:37.241 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:06:37.241 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:06:37.242 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:06:37.242 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:06:37.242 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:06:37.242 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:06:37.242 10:17:14 -- spdk/autotest.sh@117 -- # uname -s
00:06:37.242 10:17:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:37.242 10:17:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:37.242 10:17:14 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:40.527 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:40.527 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:41.514 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:06:41.774 10:17:19 -- common/autotest_common.sh@1517 -- # sleep 1
00:06:42.711 10:17:20 -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:42.711 10:17:20 -- common/autotest_common.sh@1518 -- # local bdfs
00:06:42.711 10:17:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:42.711 10:17:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:42.711 10:17:20 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:42.711 10:17:20 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:42.711 10:17:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:42.711 10:17:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:42.711 10:17:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:42.711 10:17:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:42.711 10:17:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:06:42.711 10:17:20 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:06:46.002 Waiting for block devices as requested
00:06:46.002 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:06:46.002 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:46.002 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:46.002 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:46.002 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:46.002 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:46.002 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:46.002 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:46.261 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:06:46.261 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:46.261 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:46.520 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:46.520 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:46.520 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:46.778 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:46.778 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:46.778 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:06:46.778 10:17:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:46.778 10:17:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:06:46.778 10:17:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:06:46.778 10:17:24 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme
00:06:47.037 10:17:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:47.037 10:17:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:06:47.037 10:17:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:47.037 10:17:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:47.037 10:17:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:47.037 10:17:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:47.037 10:17:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:47.037 10:17:24 -- common/autotest_common.sh@1531 -- # grep oacs
00:06:47.037 10:17:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:47.037 10:17:24 -- common/autotest_common.sh@1531 -- # oacs=' 0xe'
00:06:47.037 10:17:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:47.037 10:17:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:47.037 10:17:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:47.037 10:17:24 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:47.037 10:17:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:47.037 10:17:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:47.037 10:17:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:06:47.037 10:17:24 -- common/autotest_common.sh@1543 -- # continue
00:06:47.037 10:17:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:47.037 10:17:24 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:47.037 10:17:24 -- common/autotest_common.sh@10 -- # set +x
00:06:47.037 10:17:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:47.037 10:17:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:47.037 10:17:24 -- common/autotest_common.sh@10 -- # set +x
00:06:47.037 10:17:24 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:50.329 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:50.329 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:51.267 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:06:51.526 10:17:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:51.526 10:17:29 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:51.526 10:17:29 -- common/autotest_common.sh@10 -- # set +x
00:06:51.526 10:17:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:51.526 10:17:29 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:51.526 10:17:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:51.526 10:17:29 -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:51.526 10:17:29 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:51.526 10:17:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:51.526 10:17:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:51.526 10:17:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:51.526 10:17:29 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:51.526 10:17:29 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:51.526 10:17:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:51.526 10:17:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:51.526 10:17:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:51.526 10:17:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:51.526 10:17:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:06:51.526 10:17:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:51.526 10:17:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:06:51.526 10:17:29 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:06:51.526 10:17:29 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:06:51.526 10:17:29 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:06:51.526 10:17:29 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:06:51.526 10:17:29 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0
00:06:51.526 10:17:29 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]]
00:06:51.526 10:17:29 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2461589
00:06:51.526 10:17:29 -- common/autotest_common.sh@1585 -- # waitforlisten 2461589
00:06:51.526 10:17:29 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:51.526 10:17:29 -- common/autotest_common.sh@835 -- # '[' -z 2461589 ']'
00:06:51.526 10:17:29 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.526 10:17:29 -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:51.526 10:17:29 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.526 10:17:29 -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:51.526 10:17:29 -- common/autotest_common.sh@10 -- # set +x
00:06:51.526 [2024-12-09 10:17:29.225715] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
00:06:51.526 [2024-12-09 10:17:29.225762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461589 ]
00:06:51.786 [2024-12-09 10:17:29.299269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.786 [2024-12-09 10:17:29.339460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.044 10:17:29 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.044 10:17:29 -- common/autotest_common.sh@868 -- # return 0
00:06:52.044 10:17:29 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:06:52.044 10:17:29 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:06:52.044 10:17:29 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:06:55.329 nvme0n1
00:06:55.329 10:17:32 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:06:55.329 [2024-12-09 10:17:32.722514] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:06:55.329 request:
00:06:55.329 {
00:06:55.329 "nvme_ctrlr_name": "nvme0",
00:06:55.329 "password": "test",
00:06:55.329 "method": "bdev_nvme_opal_revert",
00:06:55.329 "req_id": 1
00:06:55.329 }
00:06:55.329 Got JSON-RPC error response
00:06:55.329 response:
00:06:55.329 {
"code": -32602,
00:06:55.329 "message": "Invalid parameters"
00:06:55.329 }
00:06:55.329 10:17:32 -- common/autotest_common.sh@1591 -- # true
00:06:55.329 10:17:32 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:06:55.329 10:17:32 -- common/autotest_common.sh@1595 -- # killprocess 2461589
00:06:55.329 10:17:32 -- common/autotest_common.sh@954 -- # '[' -z 2461589 ']'
00:06:55.329 10:17:32 -- common/autotest_common.sh@958 -- # kill -0 2461589
00:06:55.329 10:17:32 -- common/autotest_common.sh@959 -- # uname
00:06:55.329 10:17:32 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:55.329 10:17:32 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2461589
00:06:55.329 10:17:32 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:55.329 10:17:32 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:55.329 10:17:32 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2461589'
killing process with pid 2461589
00:06:55.329 10:17:32 -- common/autotest_common.sh@973 -- # kill 2461589
00:06:55.329 10:17:32 -- common/autotest_common.sh@978 -- # wait 2461589
00:06:57.227 10:17:34 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:57.227 10:17:34 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:57.227 10:17:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:57.227 10:17:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:57.227 10:17:34 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:57.227 10:17:34 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:57.227 10:17:34 -- common/autotest_common.sh@10 -- # set +x
00:06:57.227 10:17:34 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:57.227 10:17:34 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:57.227 10:17:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:57.227 10:17:34 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.227 10:17:34 -- common/autotest_common.sh@10 -- # set +x
00:06:57.227 ************************************
00:06:57.227 START TEST env
00:06:57.227 ************************************
00:06:57.227 10:17:34 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:57.485 * Looking for test storage...
00:06:57.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1711 -- # lcov --version
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:57.485 10:17:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:57.485 10:17:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:57.485 10:17:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:57.485 10:17:35 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:57.485 10:17:35 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:57.485 10:17:35 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:57.485 10:17:35 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:57.485 10:17:35 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:57.485 10:17:35 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:57.485 10:17:35 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:57.485 10:17:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:57.485 10:17:35 env -- scripts/common.sh@344 -- # case "$op" in
00:06:57.485 10:17:35 env -- scripts/common.sh@345 -- # : 1
00:06:57.485 10:17:35 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:57.485 10:17:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:57.485 10:17:35 env -- scripts/common.sh@365 -- # decimal 1
00:06:57.485 10:17:35 env -- scripts/common.sh@353 -- # local d=1
00:06:57.485 10:17:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:57.485 10:17:35 env -- scripts/common.sh@355 -- # echo 1
00:06:57.485 10:17:35 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:57.485 10:17:35 env -- scripts/common.sh@366 -- # decimal 2
00:06:57.485 10:17:35 env -- scripts/common.sh@353 -- # local d=2
00:06:57.485 10:17:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:57.485 10:17:35 env -- scripts/common.sh@355 -- # echo 2
00:06:57.485 10:17:35 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:57.485 10:17:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:57.485 10:17:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:57.485 10:17:35 env -- scripts/common.sh@368 -- # return 0
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:57.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.485 --rc genhtml_branch_coverage=1
00:06:57.485 --rc genhtml_function_coverage=1
00:06:57.485 --rc genhtml_legend=1
00:06:57.485 --rc geninfo_all_blocks=1
00:06:57.485 --rc geninfo_unexecuted_blocks=1
00:06:57.485
00:06:57.485 '
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:57.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.485 --rc genhtml_branch_coverage=1
00:06:57.485 --rc genhtml_function_coverage=1
00:06:57.485 --rc genhtml_legend=1
00:06:57.485 --rc geninfo_all_blocks=1
00:06:57.485 --rc geninfo_unexecuted_blocks=1
00:06:57.485
00:06:57.485 '
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:57.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.485 --rc genhtml_branch_coverage=1
00:06:57.485 --rc genhtml_function_coverage=1
00:06:57.485 --rc genhtml_legend=1
00:06:57.485 --rc geninfo_all_blocks=1
00:06:57.485 --rc geninfo_unexecuted_blocks=1
00:06:57.485
00:06:57.485 '
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:57.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.485 --rc genhtml_branch_coverage=1
00:06:57.485 --rc genhtml_function_coverage=1
00:06:57.485 --rc genhtml_legend=1
00:06:57.485 --rc geninfo_all_blocks=1
00:06:57.485 --rc geninfo_unexecuted_blocks=1
00:06:57.485
00:06:57.485 '
00:06:57.485 10:17:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:57.485 10:17:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.485 10:17:35 env -- common/autotest_common.sh@10 -- # set +x
00:06:57.485 ************************************
00:06:57.485 START TEST env_memory
00:06:57.485 ************************************
00:06:57.485 10:17:35 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:57.485
00:06:57.485
00:06:57.485 CUnit - A unit testing framework for C - Version 2.1-3
00:06:57.485 http://cunit.sourceforge.net/
00:06:57.485
00:06:57.485
00:06:57.485 Suite: memory
00:06:57.485 Test: alloc and free memory map ...[2024-12-09 10:17:35.186851] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:57.485 passed
00:06:57.485 Test: mem map translation ...[2024-12-09 10:17:35.205080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 [2024-12-09 10:17:35.205112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 [2024-12-09 10:17:35.205146] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 [2024-12-09 10:17:35.205152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:57.743 passed
00:06:57.744 Test: mem map registration ...[2024-12-09 10:17:35.241539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 [2024-12-09 10:17:35.241563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:57.744 passed
00:06:57.744 Test: mem map adjacent registrations ...passed
00:06:57.744
00:06:57.744 Run Summary: Type Total Ran Passed Failed Inactive
00:06:57.744 suites 1 1 n/a 0 0
00:06:57.744 tests 4 4 4 0 0
00:06:57.744 asserts 152 152 152 0 n/a
00:06:57.744
00:06:57.744 Elapsed time = 0.132 seconds
00:06:57.744
00:06:57.744 real 0m0.144s
00:06:57.744 user 0m0.134s
00:06:57.744 sys 0m0.010s
00:06:57.744 10:17:35 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.744 10:17:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:57.744 ************************************
00:06:57.744 END TEST env_memory
00:06:57.744 ************************************
00:06:57.744 10:17:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:57.744 10:17:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:57.744 10:17:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.744 10:17:35 env -- common/autotest_common.sh@10 -- # set +x
00:06:57.744 ************************************
00:06:57.744 START TEST env_vtophys
00:06:57.744 ************************************
00:06:57.744 10:17:35 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:57.744 EAL: lib.eal log level changed from notice to debug
00:06:57.744 EAL: Detected lcore 0 as core 0 on socket 0
00:06:57.744 EAL: Detected lcore 1 as core 1 on socket 0
00:06:57.744 EAL: Detected lcore 2 as core 2 on socket 0
00:06:57.744 EAL: Detected lcore 3 as core 3 on socket 0
00:06:57.744 EAL: Detected lcore 4 as core 4 on socket 0
00:06:57.744 EAL: Detected lcore 5 as core 5 on socket 0
00:06:57.744 EAL: Detected lcore 6 as core 6 on socket 0
00:06:57.744 EAL: Detected lcore 7 as core 8 on socket 0
00:06:57.744 EAL: Detected lcore 8 as core 9 on socket 0
00:06:57.744 EAL: Detected lcore 9 as core 10 on socket 0
00:06:57.744 EAL: Detected lcore 10 as core 11 on socket 0
00:06:57.744 EAL: Detected lcore 11 as core 12 on socket 0
00:06:57.744 EAL: Detected lcore 12 as core 13 on socket 0
00:06:57.744 EAL: Detected lcore 13 as core 16 on socket 0
00:06:57.744 EAL: Detected lcore 14 as core 17 on socket 0
00:06:57.744 EAL: Detected lcore 15 as core 18 on socket 0
00:06:57.744 EAL: Detected lcore 16 as core 19 on socket 0
00:06:57.744 EAL: Detected lcore 17 as core 20 on socket 0
00:06:57.744 EAL: Detected lcore 18 as core 21 on socket 0
00:06:57.744 EAL: Detected lcore 19 as core 25 on socket 0
00:06:57.744 EAL: Detected lcore 20 as core 26 on socket 0
00:06:57.744 EAL: Detected lcore 21 as core 27 on socket 0
00:06:57.744 EAL: Detected lcore 22 as core 28 on socket 0
00:06:57.744 EAL: Detected lcore 23 as core 29 on socket 0
00:06:57.744 EAL: Detected lcore 24 as core 0 on socket 1
00:06:57.744 EAL: Detected lcore 25 as core 1 on socket 1
00:06:57.744 EAL: Detected lcore 26 as core 2 on socket 1
00:06:57.744 EAL: Detected lcore 27 as core 3 on socket 1
00:06:57.744 EAL: Detected lcore 28 as core 4 on socket 1
00:06:57.744 EAL: Detected lcore 29 as core 5 on socket 1
00:06:57.744 EAL: Detected lcore 30 as core 6 on socket 1
00:06:57.744 EAL: Detected lcore 31 as core 8 on socket 1
00:06:57.744 EAL: Detected lcore 32 as core 10 on socket 1
00:06:57.744 EAL: Detected lcore 33 as core 11 on socket 1
00:06:57.744 EAL: Detected lcore 34 as core 12 on socket 1
00:06:57.744 EAL: Detected lcore 35 as core 13 on socket 1
00:06:57.744 EAL: Detected lcore 36 as core 16 on socket 1
00:06:57.744 EAL: Detected lcore 37 as core 17 on socket 1
00:06:57.744 EAL: Detected lcore 38 as core 18 on socket 1
00:06:57.744 EAL: Detected lcore 39 as core 19 on socket 1
00:06:57.744 EAL: Detected lcore 40 as core 20 on socket 1
00:06:57.744 EAL: Detected lcore 41 as core 21 on socket 1
00:06:57.744 EAL: Detected lcore 42 as core 24 on socket 1
00:06:57.744 EAL: Detected lcore 43 as core 25 on socket 1
00:06:57.744 EAL: Detected lcore 44 as core 26 on socket 1
00:06:57.744 EAL: Detected lcore 45 as core 27 on socket 1
00:06:57.744 EAL: Detected lcore 46 as core 28 on socket 1
00:06:57.744 EAL: Detected lcore 47 as core 29 on socket 1
00:06:57.744 EAL: Detected lcore 48 as core 0 on socket 0
00:06:57.744 EAL: Detected lcore 49 as core 1 on socket 0
00:06:57.744 EAL: Detected lcore 50 as core 2 on socket 0
00:06:57.744 EAL: Detected lcore 51 as core 3 on socket 0
00:06:57.744 EAL: Detected lcore 52 as core 4 on socket 0
00:06:57.744 EAL: Detected lcore 53 as core 5 on socket 0
00:06:57.744 EAL: Detected lcore 54 as core 6 on socket 0
00:06:57.744 EAL: Detected lcore 55 as core 8 on socket 0
00:06:57.744 EAL: Detected lcore 56 as core 9 on socket 0
00:06:57.744 EAL: Detected lcore 57 as core 10 on socket 0
00:06:57.744 EAL: Detected lcore 58 as core 11 on socket 0
00:06:57.744 EAL: Detected lcore 59 as core 12 on socket 0
00:06:57.744 EAL: Detected lcore 60 as core 13 on socket 0
00:06:57.744 EAL: Detected lcore 61 as core 16 on socket 0
00:06:57.744 EAL: Detected lcore 62 as core 17 on socket 0
00:06:57.744 EAL: Detected lcore 63 as core 18 on socket 0
00:06:57.744 EAL: Detected lcore 64 as core 19 on socket 0
00:06:57.744 EAL: Detected lcore 65 as core 20 on socket 0
00:06:57.744 EAL: Detected lcore 66 as core 21 on socket 0
00:06:57.744 EAL: Detected lcore 67 as core 25 on socket 0
00:06:57.744 EAL: Detected lcore 68 as core 26 on socket 0
00:06:57.744 EAL: Detected lcore 69 as core 27 on socket 0
00:06:57.744 EAL: Detected lcore 70 as core 28 on socket 0
00:06:57.744 EAL: Detected lcore 71 as core 29 on socket 0
00:06:57.744 EAL: Detected lcore 72 as core 0 on socket 1
00:06:57.744 EAL: Detected lcore 73 as core 1 on socket 1
00:06:57.744 EAL: Detected lcore 74 as core 2 on socket 1
00:06:57.744 EAL: Detected lcore 75 as core 3 on socket 1
00:06:57.744 EAL: Detected lcore 76 as core 4 on socket 1
00:06:57.744 EAL: Detected lcore 77 as core 5 on socket 1
00:06:57.744 EAL: Detected lcore 78 as core 6 on socket 1
00:06:57.744 EAL: Detected lcore 79 as core 8 on socket 1
00:06:57.744 EAL: Detected lcore 80 as core 10 on socket 1
00:06:57.744 EAL: Detected lcore 81 as core 11 on socket 1
00:06:57.744 EAL: Detected lcore 82 as core 12 on socket 1
00:06:57.744 EAL: Detected lcore 83 as core 13 on socket 1
00:06:57.744 EAL: Detected lcore 84 as core 16 on socket 1
00:06:57.744 EAL: Detected lcore 85 as core 17 on socket 1
00:06:57.744 EAL: Detected lcore 86 as core 18 on socket 1
00:06:57.744 EAL: Detected lcore 87 as core 19 on socket 1
00:06:57.744 EAL: Detected lcore 88 as core 20 on socket 1
00:06:57.744 EAL: Detected lcore 89 as core 21 on socket 1
00:06:57.744 EAL: Detected lcore 90 as core 24 on socket 1
00:06:57.744 EAL: Detected lcore 91 as core 25 on socket 1
00:06:57.744 EAL: Detected lcore 92 as core 26 on socket 1
00:06:57.744 EAL: Detected lcore 93 as core 27 on socket 1
00:06:57.744 EAL: Detected lcore 94 as core 28 on socket 1
00:06:57.744 EAL: Detected lcore 95 as core 29 on socket 1
00:06:57.744 EAL: Maximum logical cores by configuration: 128
00:06:57.744 EAL: Detected CPU lcores: 96
00:06:57.744 EAL: Detected NUMA nodes: 2
00:06:57.744 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:57.744 EAL: Detected shared linkage of DPDK
00:06:57.744 EAL: No shared files mode enabled, IPC will be disabled
00:06:57.744 EAL: Bus pci wants IOVA as 'DC'
00:06:57.744 EAL: Buses did not request a specific IOVA mode.
00:06:57.744 EAL: IOMMU is available, selecting IOVA as VA mode.
00:06:57.744 EAL: Selected IOVA mode 'VA'
00:06:57.744 EAL: Probing VFIO support...
00:06:57.744 EAL: IOMMU type 1 (Type 1) is supported
00:06:57.744 EAL: IOMMU type 7 (sPAPR) is not supported
00:06:57.744 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:06:57.744 EAL: VFIO support initialized
00:06:57.744 EAL: Ask a virtual area of 0x2e000 bytes
00:06:57.744 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:57.744 EAL: Setting up physically contiguous memory...
00:06:57.744 EAL: Setting maximum number of open files to 524288
00:06:57.744 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:57.744 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:57.744 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:57.744 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:57.744 EAL: Ask a virtual area of 0x61000 bytes
00:06:57.744 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:57.744 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:57.744 EAL: Ask a virtual area of 0x400000000 bytes
00:06:57.744 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:57.744 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:06:57.744 EAL: Hugepages will be freed exactly as allocated.
00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: TSC frequency is ~2100000 KHz 00:06:57.744 EAL: Main lcore 0 is ready (tid=7fd292ba9a00;cpuset=[0]) 00:06:57.744 EAL: Trying to obtain current memory policy. 00:06:57.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.744 EAL: Restoring previous memory policy: 0 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was expanded by 2MB 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:57.744 EAL: Mem event callback 'spdk:(nil)' registered 00:06:57.744 00:06:57.744 00:06:57.744 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.744 http://cunit.sourceforge.net/ 00:06:57.744 00:06:57.744 00:06:57.744 Suite: components_suite 00:06:57.744 Test: vtophys_malloc_test ...passed 00:06:57.744 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:57.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.744 EAL: Restoring previous memory policy: 4 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was expanded by 4MB 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was shrunk by 4MB 00:06:57.744 EAL: Trying to obtain current memory policy. 
00:06:57.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.744 EAL: Restoring previous memory policy: 4 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was expanded by 6MB 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was shrunk by 6MB 00:06:57.744 EAL: Trying to obtain current memory policy. 00:06:57.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.744 EAL: Restoring previous memory policy: 4 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was expanded by 10MB 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was shrunk by 10MB 00:06:57.744 EAL: Trying to obtain current memory policy. 00:06:57.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.744 EAL: Restoring previous memory policy: 4 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was expanded by 18MB 00:06:57.744 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.744 EAL: request: mp_malloc_sync 00:06:57.744 EAL: No shared files mode enabled, IPC is disabled 00:06:57.744 EAL: Heap on socket 0 was shrunk by 18MB 00:06:57.744 EAL: Trying to obtain current memory policy. 
00:06:57.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.001 EAL: Restoring previous memory policy: 4 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was expanded by 34MB 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was shrunk by 34MB 00:06:58.001 EAL: Trying to obtain current memory policy. 00:06:58.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.001 EAL: Restoring previous memory policy: 4 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was expanded by 66MB 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was shrunk by 66MB 00:06:58.001 EAL: Trying to obtain current memory policy. 00:06:58.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.001 EAL: Restoring previous memory policy: 4 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was expanded by 130MB 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was shrunk by 130MB 00:06:58.001 EAL: Trying to obtain current memory policy. 
00:06:58.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.001 EAL: Restoring previous memory policy: 4 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was expanded by 258MB 00:06:58.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.001 EAL: request: mp_malloc_sync 00:06:58.001 EAL: No shared files mode enabled, IPC is disabled 00:06:58.001 EAL: Heap on socket 0 was shrunk by 258MB 00:06:58.001 EAL: Trying to obtain current memory policy. 00:06:58.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.259 EAL: Restoring previous memory policy: 4 00:06:58.259 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.259 EAL: request: mp_malloc_sync 00:06:58.259 EAL: No shared files mode enabled, IPC is disabled 00:06:58.259 EAL: Heap on socket 0 was expanded by 514MB 00:06:58.259 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.259 EAL: request: mp_malloc_sync 00:06:58.259 EAL: No shared files mode enabled, IPC is disabled 00:06:58.259 EAL: Heap on socket 0 was shrunk by 514MB 00:06:58.259 EAL: Trying to obtain current memory policy. 
00:06:58.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.517 EAL: Restoring previous memory policy: 4 00:06:58.517 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.517 EAL: request: mp_malloc_sync 00:06:58.517 EAL: No shared files mode enabled, IPC is disabled 00:06:58.517 EAL: Heap on socket 0 was expanded by 1026MB 00:06:58.775 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.775 EAL: request: mp_malloc_sync 00:06:58.775 EAL: No shared files mode enabled, IPC is disabled 00:06:58.775 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:58.775 passed 00:06:58.775 00:06:58.775 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.775 suites 1 1 n/a 0 0 00:06:58.775 tests 2 2 2 0 0 00:06:58.775 asserts 497 497 497 0 n/a 00:06:58.775 00:06:58.775 Elapsed time = 0.963 seconds 00:06:58.775 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.775 EAL: request: mp_malloc_sync 00:06:58.775 EAL: No shared files mode enabled, IPC is disabled 00:06:58.775 EAL: Heap on socket 0 was shrunk by 2MB 00:06:58.775 EAL: No shared files mode enabled, IPC is disabled 00:06:58.775 EAL: No shared files mode enabled, IPC is disabled 00:06:58.775 EAL: No shared files mode enabled, IPC is disabled 00:06:58.775 00:06:58.775 real 0m1.093s 00:06:58.775 user 0m0.639s 00:06:58.775 sys 0m0.425s 00:06:58.775 10:17:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.775 10:17:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:58.775 ************************************ 00:06:58.775 END TEST env_vtophys 00:06:58.775 ************************************ 00:06:58.775 10:17:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:58.775 10:17:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.775 10:17:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.775 10:17:36 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 
************************************ 00:06:59.033 START TEST env_pci 00:06:59.033 ************************************ 00:06:59.033 10:17:36 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:59.033 00:06:59.033 00:06:59.033 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.033 http://cunit.sourceforge.net/ 00:06:59.033 00:06:59.033 00:06:59.033 Suite: pci 00:06:59.033 Test: pci_hook ...[2024-12-09 10:17:36.540701] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2462907 has claimed it 00:06:59.033 EAL: Cannot find device (10000:00:01.0) 00:06:59.033 EAL: Failed to attach device on primary process 00:06:59.033 passed 00:06:59.033 00:06:59.033 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.033 suites 1 1 n/a 0 0 00:06:59.033 tests 1 1 1 0 0 00:06:59.033 asserts 25 25 25 0 n/a 00:06:59.033 00:06:59.033 Elapsed time = 0.028 seconds 00:06:59.033 00:06:59.033 real 0m0.047s 00:06:59.033 user 0m0.014s 00:06:59.033 sys 0m0.033s 00:06:59.033 10:17:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.033 10:17:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 ************************************ 00:06:59.033 END TEST env_pci 00:06:59.033 ************************************ 00:06:59.033 10:17:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:59.033 10:17:36 env -- env/env.sh@15 -- # uname 00:06:59.033 10:17:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:59.033 10:17:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:59.033 10:17:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:59.033 10:17:36 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:59.033 10:17:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.033 10:17:36 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 ************************************ 00:06:59.033 START TEST env_dpdk_post_init 00:06:59.033 ************************************ 00:06:59.033 10:17:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:59.033 EAL: Detected CPU lcores: 96 00:06:59.033 EAL: Detected NUMA nodes: 2 00:06:59.033 EAL: Detected shared linkage of DPDK 00:06:59.033 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:59.033 EAL: Selected IOVA mode 'VA' 00:06:59.033 EAL: VFIO support initialized 00:06:59.033 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:59.292 EAL: Using IOMMU type 1 (Type 1) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:59.292 EAL: Ignore mapping IO port bar(1) 00:06:59.292 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:07:00.244 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:07:00.244 EAL: Ignore mapping IO port bar(1) 00:07:00.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:07:03.527 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:07:03.527 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:07:04.093 Starting DPDK initialization... 00:07:04.093 Starting SPDK post initialization... 00:07:04.093 SPDK NVMe probe 00:07:04.093 Attaching to 0000:5e:00.0 00:07:04.093 Attached to 0000:5e:00.0 00:07:04.093 Cleaning up... 
00:07:04.093 00:07:04.093 real 0m4.941s 00:07:04.093 user 0m3.503s 00:07:04.093 sys 0m0.502s 00:07:04.093 10:17:41 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.093 10:17:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:04.093 ************************************ 00:07:04.093 END TEST env_dpdk_post_init 00:07:04.093 ************************************ 00:07:04.093 10:17:41 env -- env/env.sh@26 -- # uname 00:07:04.093 10:17:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:04.093 10:17:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:04.093 10:17:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.093 10:17:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.093 10:17:41 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.093 ************************************ 00:07:04.093 START TEST env_mem_callbacks 00:07:04.093 ************************************ 00:07:04.093 10:17:41 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:04.093 EAL: Detected CPU lcores: 96 00:07:04.093 EAL: Detected NUMA nodes: 2 00:07:04.093 EAL: Detected shared linkage of DPDK 00:07:04.093 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:04.093 EAL: Selected IOVA mode 'VA' 00:07:04.093 EAL: VFIO support initialized 00:07:04.093 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:04.093 00:07:04.093 00:07:04.093 CUnit - A unit testing framework for C - Version 2.1-3 00:07:04.093 http://cunit.sourceforge.net/ 00:07:04.093 00:07:04.093 00:07:04.093 Suite: memory 00:07:04.093 Test: test ... 
00:07:04.093 register 0x200000200000 2097152 00:07:04.093 malloc 3145728 00:07:04.093 register 0x200000400000 4194304 00:07:04.093 buf 0x200000500000 len 3145728 PASSED 00:07:04.093 malloc 64 00:07:04.093 buf 0x2000004fff40 len 64 PASSED 00:07:04.093 malloc 4194304 00:07:04.093 register 0x200000800000 6291456 00:07:04.093 buf 0x200000a00000 len 4194304 PASSED 00:07:04.093 free 0x200000500000 3145728 00:07:04.093 free 0x2000004fff40 64 00:07:04.093 unregister 0x200000400000 4194304 PASSED 00:07:04.093 free 0x200000a00000 4194304 00:07:04.093 unregister 0x200000800000 6291456 PASSED 00:07:04.093 malloc 8388608 00:07:04.093 register 0x200000400000 10485760 00:07:04.093 buf 0x200000600000 len 8388608 PASSED 00:07:04.093 free 0x200000600000 8388608 00:07:04.093 unregister 0x200000400000 10485760 PASSED 00:07:04.093 passed 00:07:04.093 00:07:04.093 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.093 suites 1 1 n/a 0 0 00:07:04.093 tests 1 1 1 0 0 00:07:04.093 asserts 15 15 15 0 n/a 00:07:04.093 00:07:04.093 Elapsed time = 0.008 seconds 00:07:04.093 00:07:04.093 real 0m0.056s 00:07:04.093 user 0m0.019s 00:07:04.093 sys 0m0.037s 00:07:04.093 10:17:41 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.093 10:17:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:04.093 ************************************ 00:07:04.093 END TEST env_mem_callbacks 00:07:04.093 ************************************ 00:07:04.093 00:07:04.093 real 0m6.824s 00:07:04.093 user 0m4.557s 00:07:04.093 sys 0m1.339s 00:07:04.093 10:17:41 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.093 10:17:41 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.093 ************************************ 00:07:04.093 END TEST env 00:07:04.093 ************************************ 00:07:04.093 10:17:41 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:04.093 10:17:41 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.093 10:17:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.093 10:17:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.351 ************************************ 00:07:04.351 START TEST rpc 00:07:04.351 ************************************ 00:07:04.351 10:17:41 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:04.351 * Looking for test storage... 00:07:04.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:04.351 10:17:41 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.351 10:17:41 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.351 10:17:41 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.351 10:17:41 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.351 10:17:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.351 10:17:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.351 10:17:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.351 10:17:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.351 10:17:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.351 10:17:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.351 10:17:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.351 10:17:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.351 10:17:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.351 10:17:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.351 10:17:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.351 10:17:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:04.351 10:17:41 rpc -- scripts/common.sh@345 -- # : 1 00:07:04.352 10:17:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.352 10:17:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.352 10:17:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:04.352 10:17:41 rpc -- scripts/common.sh@353 -- # local d=1 00:07:04.352 10:17:42 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.352 10:17:42 rpc -- scripts/common.sh@355 -- # echo 1 00:07:04.352 10:17:42 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.352 10:17:42 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:04.352 10:17:42 rpc -- scripts/common.sh@353 -- # local d=2 00:07:04.352 10:17:42 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.352 10:17:42 rpc -- scripts/common.sh@355 -- # echo 2 00:07:04.352 10:17:42 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.352 10:17:42 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.352 10:17:42 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.352 10:17:42 rpc -- scripts/common.sh@368 -- # return 0 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.352 --rc genhtml_branch_coverage=1 00:07:04.352 --rc genhtml_function_coverage=1 00:07:04.352 --rc genhtml_legend=1 00:07:04.352 --rc geninfo_all_blocks=1 00:07:04.352 --rc geninfo_unexecuted_blocks=1 00:07:04.352 00:07:04.352 ' 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.352 --rc genhtml_branch_coverage=1 00:07:04.352 --rc genhtml_function_coverage=1 00:07:04.352 --rc genhtml_legend=1 00:07:04.352 --rc geninfo_all_blocks=1 00:07:04.352 --rc geninfo_unexecuted_blocks=1 00:07:04.352 00:07:04.352 ' 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:04.352 --rc genhtml_branch_coverage=1 00:07:04.352 --rc genhtml_function_coverage=1 00:07:04.352 --rc genhtml_legend=1 00:07:04.352 --rc geninfo_all_blocks=1 00:07:04.352 --rc geninfo_unexecuted_blocks=1 00:07:04.352 00:07:04.352 ' 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.352 --rc genhtml_branch_coverage=1 00:07:04.352 --rc genhtml_function_coverage=1 00:07:04.352 --rc genhtml_legend=1 00:07:04.352 --rc geninfo_all_blocks=1 00:07:04.352 --rc geninfo_unexecuted_blocks=1 00:07:04.352 00:07:04.352 ' 00:07:04.352 10:17:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2463960 00:07:04.352 10:17:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.352 10:17:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:04.352 10:17:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2463960 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@835 -- # '[' -z 2463960 ']' 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.352 10:17:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 [2024-12-09 10:17:42.061412] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:04.352 [2024-12-09 10:17:42.061458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463960 ] 00:07:04.611 [2024-12-09 10:17:42.137330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.611 [2024-12-09 10:17:42.175905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:04.611 [2024-12-09 10:17:42.175942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2463960' to capture a snapshot of events at runtime. 00:07:04.611 [2024-12-09 10:17:42.175949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.611 [2024-12-09 10:17:42.175956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.611 [2024-12-09 10:17:42.175961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2463960 for offline analysis/debug. 
00:07:04.611 [2024-12-09 10:17:42.176552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.187 10:17:42 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.187 10:17:42 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.187 10:17:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:05.187 10:17:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:05.187 10:17:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:05.187 10:17:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:05.187 10:17:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.187 10:17:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.187 10:17:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.446 ************************************ 00:07:05.446 START TEST rpc_integrity 00:07:05.446 ************************************ 00:07:05.446 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:05.446 10:17:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:05.446 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.446 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.446 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.446 10:17:42 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:07:05.446 10:17:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:05.446 10:17:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:05.446 10:17:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:05.446 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.446 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.446 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.447 10:17:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:05.447 10:17:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:05.447 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.447 10:17:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:05.447 { 00:07:05.447 "name": "Malloc0", 00:07:05.447 "aliases": [ 00:07:05.447 "502d523c-e7af-4e85-a0bb-c529235e7f3a" 00:07:05.447 ], 00:07:05.447 "product_name": "Malloc disk", 00:07:05.447 "block_size": 512, 00:07:05.447 "num_blocks": 16384, 00:07:05.447 "uuid": "502d523c-e7af-4e85-a0bb-c529235e7f3a", 00:07:05.447 "assigned_rate_limits": { 00:07:05.447 "rw_ios_per_sec": 0, 00:07:05.447 "rw_mbytes_per_sec": 0, 00:07:05.447 "r_mbytes_per_sec": 0, 00:07:05.447 "w_mbytes_per_sec": 0 00:07:05.447 }, 00:07:05.447 "claimed": false, 00:07:05.447 "zoned": false, 00:07:05.447 "supported_io_types": { 00:07:05.447 "read": true, 00:07:05.447 "write": true, 00:07:05.447 "unmap": true, 00:07:05.447 "flush": true, 00:07:05.447 "reset": true, 00:07:05.447 "nvme_admin": false, 00:07:05.447 "nvme_io": false, 00:07:05.447 "nvme_io_md": false, 00:07:05.447 "write_zeroes": true, 00:07:05.447 "zcopy": true, 00:07:05.447 "get_zone_info": false, 00:07:05.447 
"zone_management": false, 00:07:05.447 "zone_append": false, 00:07:05.447 "compare": false, 00:07:05.447 "compare_and_write": false, 00:07:05.447 "abort": true, 00:07:05.447 "seek_hole": false, 00:07:05.447 "seek_data": false, 00:07:05.447 "copy": true, 00:07:05.447 "nvme_iov_md": false 00:07:05.447 }, 00:07:05.447 "memory_domains": [ 00:07:05.447 { 00:07:05.447 "dma_device_id": "system", 00:07:05.447 "dma_device_type": 1 00:07:05.447 }, 00:07:05.447 { 00:07:05.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.447 "dma_device_type": 2 00:07:05.447 } 00:07:05.447 ], 00:07:05.447 "driver_specific": {} 00:07:05.447 } 00:07:05.447 ]' 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.447 [2024-12-09 10:17:43.058351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:05.447 [2024-12-09 10:17:43.058383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.447 [2024-12-09 10:17:43.058395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e17100 00:07:05.447 [2024-12-09 10:17:43.058401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.447 [2024-12-09 10:17:43.059485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.447 [2024-12-09 10:17:43.059505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:05.447 Passthru0 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:05.447 { 00:07:05.447 "name": "Malloc0", 00:07:05.447 "aliases": [ 00:07:05.447 "502d523c-e7af-4e85-a0bb-c529235e7f3a" 00:07:05.447 ], 00:07:05.447 "product_name": "Malloc disk", 00:07:05.447 "block_size": 512, 00:07:05.447 "num_blocks": 16384, 00:07:05.447 "uuid": "502d523c-e7af-4e85-a0bb-c529235e7f3a", 00:07:05.447 "assigned_rate_limits": { 00:07:05.447 "rw_ios_per_sec": 0, 00:07:05.447 "rw_mbytes_per_sec": 0, 00:07:05.447 "r_mbytes_per_sec": 0, 00:07:05.447 "w_mbytes_per_sec": 0 00:07:05.447 }, 00:07:05.447 "claimed": true, 00:07:05.447 "claim_type": "exclusive_write", 00:07:05.447 "zoned": false, 00:07:05.447 "supported_io_types": { 00:07:05.447 "read": true, 00:07:05.447 "write": true, 00:07:05.447 "unmap": true, 00:07:05.447 "flush": true, 00:07:05.447 "reset": true, 00:07:05.447 "nvme_admin": false, 00:07:05.447 "nvme_io": false, 00:07:05.447 "nvme_io_md": false, 00:07:05.447 "write_zeroes": true, 00:07:05.447 "zcopy": true, 00:07:05.447 "get_zone_info": false, 00:07:05.447 "zone_management": false, 00:07:05.447 "zone_append": false, 00:07:05.447 "compare": false, 00:07:05.447 "compare_and_write": false, 00:07:05.447 "abort": true, 00:07:05.447 "seek_hole": false, 00:07:05.447 "seek_data": false, 00:07:05.447 "copy": true, 00:07:05.447 "nvme_iov_md": false 00:07:05.447 }, 00:07:05.447 "memory_domains": [ 00:07:05.447 { 00:07:05.447 "dma_device_id": "system", 00:07:05.447 "dma_device_type": 1 00:07:05.447 }, 00:07:05.447 { 00:07:05.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.447 "dma_device_type": 2 00:07:05.447 } 00:07:05.447 ], 00:07:05.447 "driver_specific": {} 00:07:05.447 }, 00:07:05.447 { 
00:07:05.447 "name": "Passthru0", 00:07:05.447 "aliases": [ 00:07:05.447 "5603fff3-85db-5156-8c8a-46ee207d723c" 00:07:05.447 ], 00:07:05.447 "product_name": "passthru", 00:07:05.447 "block_size": 512, 00:07:05.447 "num_blocks": 16384, 00:07:05.447 "uuid": "5603fff3-85db-5156-8c8a-46ee207d723c", 00:07:05.447 "assigned_rate_limits": { 00:07:05.447 "rw_ios_per_sec": 0, 00:07:05.447 "rw_mbytes_per_sec": 0, 00:07:05.447 "r_mbytes_per_sec": 0, 00:07:05.447 "w_mbytes_per_sec": 0 00:07:05.447 }, 00:07:05.447 "claimed": false, 00:07:05.447 "zoned": false, 00:07:05.447 "supported_io_types": { 00:07:05.447 "read": true, 00:07:05.447 "write": true, 00:07:05.447 "unmap": true, 00:07:05.447 "flush": true, 00:07:05.447 "reset": true, 00:07:05.447 "nvme_admin": false, 00:07:05.447 "nvme_io": false, 00:07:05.447 "nvme_io_md": false, 00:07:05.447 "write_zeroes": true, 00:07:05.447 "zcopy": true, 00:07:05.447 "get_zone_info": false, 00:07:05.447 "zone_management": false, 00:07:05.447 "zone_append": false, 00:07:05.447 "compare": false, 00:07:05.447 "compare_and_write": false, 00:07:05.447 "abort": true, 00:07:05.447 "seek_hole": false, 00:07:05.447 "seek_data": false, 00:07:05.447 "copy": true, 00:07:05.447 "nvme_iov_md": false 00:07:05.447 }, 00:07:05.447 "memory_domains": [ 00:07:05.447 { 00:07:05.447 "dma_device_id": "system", 00:07:05.447 "dma_device_type": 1 00:07:05.447 }, 00:07:05.447 { 00:07:05.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.447 "dma_device_type": 2 00:07:05.447 } 00:07:05.447 ], 00:07:05.447 "driver_specific": { 00:07:05.447 "passthru": { 00:07:05.447 "name": "Passthru0", 00:07:05.447 "base_bdev_name": "Malloc0" 00:07:05.447 } 00:07:05.447 } 00:07:05.447 } 00:07:05.447 ]' 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:05.447 10:17:43 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.447 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.447 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.448 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:05.448 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.448 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.448 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.448 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:05.448 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:05.704 10:17:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:05.704 00:07:05.704 real 0m0.284s 00:07:05.704 user 0m0.173s 00:07:05.704 sys 0m0.043s 00:07:05.704 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.704 10:17:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.704 ************************************ 00:07:05.704 END TEST rpc_integrity 00:07:05.704 ************************************ 00:07:05.704 10:17:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:05.704 10:17:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.704 10:17:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.704 10:17:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.704 ************************************ 00:07:05.704 START TEST rpc_plugins 
00:07:05.704 ************************************ 00:07:05.704 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:05.704 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:05.704 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.704 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:05.704 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.704 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:05.704 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:05.704 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.704 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:05.704 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.704 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:05.704 { 00:07:05.704 "name": "Malloc1", 00:07:05.704 "aliases": [ 00:07:05.705 "b46553f7-c407-4469-9317-95457094e8ba" 00:07:05.705 ], 00:07:05.705 "product_name": "Malloc disk", 00:07:05.705 "block_size": 4096, 00:07:05.705 "num_blocks": 256, 00:07:05.705 "uuid": "b46553f7-c407-4469-9317-95457094e8ba", 00:07:05.705 "assigned_rate_limits": { 00:07:05.705 "rw_ios_per_sec": 0, 00:07:05.705 "rw_mbytes_per_sec": 0, 00:07:05.705 "r_mbytes_per_sec": 0, 00:07:05.705 "w_mbytes_per_sec": 0 00:07:05.705 }, 00:07:05.705 "claimed": false, 00:07:05.705 "zoned": false, 00:07:05.705 "supported_io_types": { 00:07:05.705 "read": true, 00:07:05.705 "write": true, 00:07:05.705 "unmap": true, 00:07:05.705 "flush": true, 00:07:05.705 "reset": true, 00:07:05.705 "nvme_admin": false, 00:07:05.705 "nvme_io": false, 00:07:05.705 "nvme_io_md": false, 00:07:05.705 "write_zeroes": true, 00:07:05.705 "zcopy": true, 00:07:05.705 "get_zone_info": false, 00:07:05.705 "zone_management": false, 00:07:05.705 
"zone_append": false, 00:07:05.705 "compare": false, 00:07:05.705 "compare_and_write": false, 00:07:05.705 "abort": true, 00:07:05.705 "seek_hole": false, 00:07:05.705 "seek_data": false, 00:07:05.705 "copy": true, 00:07:05.705 "nvme_iov_md": false 00:07:05.705 }, 00:07:05.705 "memory_domains": [ 00:07:05.705 { 00:07:05.705 "dma_device_id": "system", 00:07:05.705 "dma_device_type": 1 00:07:05.705 }, 00:07:05.705 { 00:07:05.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.705 "dma_device_type": 2 00:07:05.705 } 00:07:05.705 ], 00:07:05.705 "driver_specific": {} 00:07:05.705 } 00:07:05.705 ]' 00:07:05.705 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:05.705 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:05.705 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.705 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.705 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:05.705 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:05.705 10:17:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:05.705 00:07:05.705 real 0m0.139s 00:07:05.705 user 0m0.082s 00:07:05.705 sys 0m0.019s 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.705 10:17:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:05.705 ************************************ 
00:07:05.705 END TEST rpc_plugins 00:07:05.705 ************************************ 00:07:05.962 10:17:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:05.962 10:17:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.962 10:17:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.962 10:17:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.962 ************************************ 00:07:05.962 START TEST rpc_trace_cmd_test 00:07:05.962 ************************************ 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:05.962 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2463960", 00:07:05.962 "tpoint_group_mask": "0x8", 00:07:05.962 "iscsi_conn": { 00:07:05.962 "mask": "0x2", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "scsi": { 00:07:05.962 "mask": "0x4", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "bdev": { 00:07:05.962 "mask": "0x8", 00:07:05.962 "tpoint_mask": "0xffffffffffffffff" 00:07:05.962 }, 00:07:05.962 "nvmf_rdma": { 00:07:05.962 "mask": "0x10", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "nvmf_tcp": { 00:07:05.962 "mask": "0x20", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "ftl": { 00:07:05.962 "mask": "0x40", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "blobfs": { 00:07:05.962 "mask": "0x80", 00:07:05.962 
"tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "dsa": { 00:07:05.962 "mask": "0x200", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "thread": { 00:07:05.962 "mask": "0x400", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "nvme_pcie": { 00:07:05.962 "mask": "0x800", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "iaa": { 00:07:05.962 "mask": "0x1000", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "nvme_tcp": { 00:07:05.962 "mask": "0x2000", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "bdev_nvme": { 00:07:05.962 "mask": "0x4000", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "sock": { 00:07:05.962 "mask": "0x8000", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "blob": { 00:07:05.962 "mask": "0x10000", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "bdev_raid": { 00:07:05.962 "mask": "0x20000", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 }, 00:07:05.962 "scheduler": { 00:07:05.962 "mask": "0x40000", 00:07:05.962 "tpoint_mask": "0x0" 00:07:05.962 } 00:07:05.962 }' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:05.962 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:06.220 10:17:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:06.220 00:07:06.220 real 0m0.218s 00:07:06.220 user 0m0.178s 00:07:06.220 sys 0m0.032s 00:07:06.220 10:17:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.220 10:17:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 ************************************ 00:07:06.220 END TEST rpc_trace_cmd_test 00:07:06.220 ************************************ 00:07:06.220 10:17:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:06.220 10:17:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:06.220 10:17:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:06.220 10:17:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.220 10:17:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.220 10:17:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 ************************************ 00:07:06.220 START TEST rpc_daemon_integrity 00:07:06.220 ************************************ 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.220 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:06.220 { 00:07:06.220 "name": "Malloc2", 00:07:06.220 "aliases": [ 00:07:06.220 "c54eafbb-fbc4-4f1c-adc7-46250375a788" 00:07:06.220 ], 00:07:06.220 "product_name": "Malloc disk", 00:07:06.220 "block_size": 512, 00:07:06.220 "num_blocks": 16384, 00:07:06.220 "uuid": "c54eafbb-fbc4-4f1c-adc7-46250375a788", 00:07:06.220 "assigned_rate_limits": { 00:07:06.220 "rw_ios_per_sec": 0, 00:07:06.221 "rw_mbytes_per_sec": 0, 00:07:06.221 "r_mbytes_per_sec": 0, 00:07:06.221 "w_mbytes_per_sec": 0 00:07:06.221 }, 00:07:06.221 "claimed": false, 00:07:06.221 "zoned": false, 00:07:06.221 "supported_io_types": { 00:07:06.221 "read": true, 00:07:06.221 "write": true, 00:07:06.221 "unmap": true, 00:07:06.221 "flush": true, 00:07:06.221 "reset": true, 00:07:06.221 "nvme_admin": false, 00:07:06.221 "nvme_io": false, 00:07:06.221 "nvme_io_md": false, 00:07:06.221 "write_zeroes": true, 00:07:06.221 "zcopy": true, 00:07:06.221 "get_zone_info": false, 00:07:06.221 "zone_management": false, 00:07:06.221 "zone_append": false, 00:07:06.221 "compare": false, 00:07:06.221 "compare_and_write": false, 00:07:06.221 "abort": true, 00:07:06.221 "seek_hole": false, 00:07:06.221 "seek_data": false, 00:07:06.221 "copy": true, 00:07:06.221 "nvme_iov_md": false 00:07:06.221 }, 00:07:06.221 "memory_domains": [ 00:07:06.221 { 
00:07:06.221 "dma_device_id": "system", 00:07:06.221 "dma_device_type": 1 00:07:06.221 }, 00:07:06.221 { 00:07:06.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.221 "dma_device_type": 2 00:07:06.221 } 00:07:06.221 ], 00:07:06.221 "driver_specific": {} 00:07:06.221 } 00:07:06.221 ]' 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.221 [2024-12-09 10:17:43.896618] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:06.221 [2024-12-09 10:17:43.896647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.221 [2024-12-09 10:17:43.896659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cd5450 00:07:06.221 [2024-12-09 10:17:43.896665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.221 [2024-12-09 10:17:43.897634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.221 [2024-12-09 10:17:43.897654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:06.221 Passthru0 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:06.221 { 00:07:06.221 "name": "Malloc2", 00:07:06.221 "aliases": [ 00:07:06.221 "c54eafbb-fbc4-4f1c-adc7-46250375a788" 00:07:06.221 ], 00:07:06.221 "product_name": "Malloc disk", 00:07:06.221 "block_size": 512, 00:07:06.221 "num_blocks": 16384, 00:07:06.221 "uuid": "c54eafbb-fbc4-4f1c-adc7-46250375a788", 00:07:06.221 "assigned_rate_limits": { 00:07:06.221 "rw_ios_per_sec": 0, 00:07:06.221 "rw_mbytes_per_sec": 0, 00:07:06.221 "r_mbytes_per_sec": 0, 00:07:06.221 "w_mbytes_per_sec": 0 00:07:06.221 }, 00:07:06.221 "claimed": true, 00:07:06.221 "claim_type": "exclusive_write", 00:07:06.221 "zoned": false, 00:07:06.221 "supported_io_types": { 00:07:06.221 "read": true, 00:07:06.221 "write": true, 00:07:06.221 "unmap": true, 00:07:06.221 "flush": true, 00:07:06.221 "reset": true, 00:07:06.221 "nvme_admin": false, 00:07:06.221 "nvme_io": false, 00:07:06.221 "nvme_io_md": false, 00:07:06.221 "write_zeroes": true, 00:07:06.221 "zcopy": true, 00:07:06.221 "get_zone_info": false, 00:07:06.221 "zone_management": false, 00:07:06.221 "zone_append": false, 00:07:06.221 "compare": false, 00:07:06.221 "compare_and_write": false, 00:07:06.221 "abort": true, 00:07:06.221 "seek_hole": false, 00:07:06.221 "seek_data": false, 00:07:06.221 "copy": true, 00:07:06.221 "nvme_iov_md": false 00:07:06.221 }, 00:07:06.221 "memory_domains": [ 00:07:06.221 { 00:07:06.221 "dma_device_id": "system", 00:07:06.221 "dma_device_type": 1 00:07:06.221 }, 00:07:06.221 { 00:07:06.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.221 "dma_device_type": 2 00:07:06.221 } 00:07:06.221 ], 00:07:06.221 "driver_specific": {} 00:07:06.221 }, 00:07:06.221 { 00:07:06.221 "name": "Passthru0", 00:07:06.221 "aliases": [ 00:07:06.221 "73df12d8-1907-51a0-8bd8-9f438460ad27" 00:07:06.221 ], 00:07:06.221 "product_name": "passthru", 00:07:06.221 "block_size": 512, 00:07:06.221 "num_blocks": 16384, 00:07:06.221 "uuid": 
"73df12d8-1907-51a0-8bd8-9f438460ad27", 00:07:06.221 "assigned_rate_limits": { 00:07:06.221 "rw_ios_per_sec": 0, 00:07:06.221 "rw_mbytes_per_sec": 0, 00:07:06.221 "r_mbytes_per_sec": 0, 00:07:06.221 "w_mbytes_per_sec": 0 00:07:06.221 }, 00:07:06.221 "claimed": false, 00:07:06.221 "zoned": false, 00:07:06.221 "supported_io_types": { 00:07:06.221 "read": true, 00:07:06.221 "write": true, 00:07:06.221 "unmap": true, 00:07:06.221 "flush": true, 00:07:06.221 "reset": true, 00:07:06.221 "nvme_admin": false, 00:07:06.221 "nvme_io": false, 00:07:06.221 "nvme_io_md": false, 00:07:06.221 "write_zeroes": true, 00:07:06.221 "zcopy": true, 00:07:06.221 "get_zone_info": false, 00:07:06.221 "zone_management": false, 00:07:06.221 "zone_append": false, 00:07:06.221 "compare": false, 00:07:06.221 "compare_and_write": false, 00:07:06.221 "abort": true, 00:07:06.221 "seek_hole": false, 00:07:06.221 "seek_data": false, 00:07:06.221 "copy": true, 00:07:06.221 "nvme_iov_md": false 00:07:06.221 }, 00:07:06.221 "memory_domains": [ 00:07:06.221 { 00:07:06.221 "dma_device_id": "system", 00:07:06.221 "dma_device_type": 1 00:07:06.221 }, 00:07:06.221 { 00:07:06.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.221 "dma_device_type": 2 00:07:06.221 } 00:07:06.221 ], 00:07:06.221 "driver_specific": { 00:07:06.221 "passthru": { 00:07:06.221 "name": "Passthru0", 00:07:06.221 "base_bdev_name": "Malloc2" 00:07:06.221 } 00:07:06.221 } 00:07:06.221 } 00:07:06.221 ]' 00:07:06.221 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:06.480 10:17:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:06.480 10:17:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:06.480 00:07:06.480 real 0m0.280s 00:07:06.480 user 0m0.179s 00:07:06.480 sys 0m0.040s 00:07:06.480 10:17:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.480 10:17:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.480 ************************************ 00:07:06.480 END TEST rpc_daemon_integrity 00:07:06.480 ************************************ 00:07:06.480 10:17:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:06.480 10:17:44 rpc -- rpc/rpc.sh@84 -- # killprocess 2463960 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@954 -- # '[' -z 2463960 ']' 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@958 -- # kill -0 2463960 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@959 -- # uname 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.480 10:17:44 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463960 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463960' 00:07:06.480 killing process with pid 2463960 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@973 -- # kill 2463960 00:07:06.480 10:17:44 rpc -- common/autotest_common.sh@978 -- # wait 2463960 00:07:06.738 00:07:06.738 real 0m2.599s 00:07:06.738 user 0m3.286s 00:07:06.738 sys 0m0.759s 00:07:06.738 10:17:44 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.738 10:17:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.738 ************************************ 00:07:06.738 END TEST rpc 00:07:06.738 ************************************ 00:07:06.996 10:17:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:06.996 10:17:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.996 10:17:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.996 10:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:06.996 ************************************ 00:07:06.996 START TEST skip_rpc 00:07:06.996 ************************************ 00:07:06.996 10:17:44 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:06.996 * Looking for test storage... 
00:07:06.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:06.996 10:17:44 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:06.996 10:17:44 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.996 10:17:44 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:06.996 10:17:44 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.996 10:17:44 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:06.997 10:17:44 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:06.997 10:17:44 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.997 10:17:44 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:06.997 10:17:44 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.997 10:17:44 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.997 10:17:44 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.997 10:17:44 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.997 --rc genhtml_branch_coverage=1 00:07:06.997 --rc genhtml_function_coverage=1 00:07:06.997 --rc genhtml_legend=1 00:07:06.997 --rc geninfo_all_blocks=1 00:07:06.997 --rc geninfo_unexecuted_blocks=1 00:07:06.997 00:07:06.997 ' 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.997 --rc genhtml_branch_coverage=1 00:07:06.997 --rc genhtml_function_coverage=1 00:07:06.997 --rc genhtml_legend=1 00:07:06.997 --rc geninfo_all_blocks=1 00:07:06.997 --rc geninfo_unexecuted_blocks=1 00:07:06.997 00:07:06.997 ' 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.997 --rc genhtml_branch_coverage=1 00:07:06.997 --rc genhtml_function_coverage=1 00:07:06.997 --rc genhtml_legend=1 00:07:06.997 --rc geninfo_all_blocks=1 00:07:06.997 --rc geninfo_unexecuted_blocks=1 00:07:06.997 00:07:06.997 ' 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.997 --rc genhtml_branch_coverage=1 00:07:06.997 --rc genhtml_function_coverage=1 00:07:06.997 --rc genhtml_legend=1 00:07:06.997 --rc geninfo_all_blocks=1 00:07:06.997 --rc geninfo_unexecuted_blocks=1 00:07:06.997 00:07:06.997 ' 00:07:06.997 10:17:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:06.997 10:17:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:06.997 10:17:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.997 10:17:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.997 ************************************ 00:07:06.997 START TEST skip_rpc 00:07:06.997 ************************************ 00:07:06.997 10:17:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:06.997 10:17:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2464607 00:07:06.997 10:17:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.997 10:17:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:06.997 10:17:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:07:07.255 [2024-12-09 10:17:44.761139] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:07.255 [2024-12-09 10:17:44.761174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464607 ] 00:07:07.255 [2024-12-09 10:17:44.834619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.255 [2024-12-09 10:17:44.874299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:12.653 10:17:49 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2464607 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2464607 ']' 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2464607 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464607 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464607' 00:07:12.653 killing process with pid 2464607 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2464607 00:07:12.653 10:17:49 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2464607 00:07:12.653 00:07:12.653 real 0m5.360s 00:07:12.653 user 0m5.115s 00:07:12.653 sys 0m0.271s 00:07:12.653 10:17:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.653 10:17:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.653 ************************************ 00:07:12.653 END TEST skip_rpc 00:07:12.653 ************************************ 00:07:12.653 10:17:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:12.653 10:17:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.653 10:17:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.653 10:17:50 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.653 ************************************ 00:07:12.653 START TEST skip_rpc_with_json 00:07:12.653 ************************************ 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2465555 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2465555 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2465555 ']' 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.653 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:12.653 [2024-12-09 10:17:50.182712] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
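The `killprocess` helper that recurs throughout this trace follows one shape: confirm the pid is alive with `kill -0`, check the process name so a `sudo` wrapper is never killed, then kill and reap. A condensed sketch of that logic (simplified; the real helper in `autotest_common.sh` does more):

```shell
# Condensed sketch of the killprocess pattern seen in the trace.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1          # nothing to do
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 1                  # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # reap; ignore signal status
}

sleep 60 &
killprocess $!
```

The `wait` at the end matters in the test harness: it prevents the dead target from lingering as a zombie and makes a later `kill -0` probe reliably report the pid as gone.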
00:07:12.653 [2024-12-09 10:17:50.182748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465555 ] 00:07:12.653 [2024-12-09 10:17:50.241086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.653 [2024-12-09 10:17:50.284507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.911 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.911 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:12.911 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:12.911 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.911 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 [2024-12-09 10:17:50.505873] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:12.911 request: 00:07:12.911 { 00:07:12.911 "trtype": "tcp", 00:07:12.911 "method": "nvmf_get_transports", 00:07:12.911 "req_id": 1 00:07:12.911 } 00:07:12.911 Got JSON-RPC error response 00:07:12.911 response: 00:07:12.911 { 00:07:12.911 "code": -19, 00:07:12.911 "message": "No such device" 00:07:12.911 } 00:07:12.911 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:12.911 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:12.912 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.912 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:12.912 [2024-12-09 10:17:50.514010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.912 10:17:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.912 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:12.912 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.912 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.170 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:13.170 { 00:07:13.170 "subsystems": [ 00:07:13.170 { 00:07:13.170 "subsystem": "fsdev", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "fsdev_set_opts", 00:07:13.170 "params": { 00:07:13.170 "fsdev_io_pool_size": 65535, 00:07:13.170 "fsdev_io_cache_size": 256 00:07:13.170 } 00:07:13.170 } 00:07:13.170 ] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "vfio_user_target", 00:07:13.170 "config": null 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "keyring", 00:07:13.170 "config": [] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "iobuf", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "iobuf_set_options", 00:07:13.170 "params": { 00:07:13.170 "small_pool_count": 8192, 00:07:13.170 "large_pool_count": 1024, 00:07:13.170 "small_bufsize": 8192, 00:07:13.170 "large_bufsize": 135168, 00:07:13.170 "enable_numa": false 00:07:13.170 } 00:07:13.170 } 00:07:13.170 ] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "sock", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "sock_set_default_impl", 00:07:13.170 "params": { 00:07:13.170 "impl_name": "posix" 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "sock_impl_set_options", 00:07:13.170 "params": { 00:07:13.170 "impl_name": "ssl", 00:07:13.170 "recv_buf_size": 4096, 00:07:13.170 "send_buf_size": 4096, 
00:07:13.170 "enable_recv_pipe": true, 00:07:13.170 "enable_quickack": false, 00:07:13.170 "enable_placement_id": 0, 00:07:13.170 "enable_zerocopy_send_server": true, 00:07:13.170 "enable_zerocopy_send_client": false, 00:07:13.170 "zerocopy_threshold": 0, 00:07:13.170 "tls_version": 0, 00:07:13.170 "enable_ktls": false 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "sock_impl_set_options", 00:07:13.170 "params": { 00:07:13.170 "impl_name": "posix", 00:07:13.170 "recv_buf_size": 2097152, 00:07:13.170 "send_buf_size": 2097152, 00:07:13.170 "enable_recv_pipe": true, 00:07:13.170 "enable_quickack": false, 00:07:13.170 "enable_placement_id": 0, 00:07:13.170 "enable_zerocopy_send_server": true, 00:07:13.170 "enable_zerocopy_send_client": false, 00:07:13.170 "zerocopy_threshold": 0, 00:07:13.170 "tls_version": 0, 00:07:13.170 "enable_ktls": false 00:07:13.170 } 00:07:13.170 } 00:07:13.170 ] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "vmd", 00:07:13.170 "config": [] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "accel", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "accel_set_options", 00:07:13.170 "params": { 00:07:13.170 "small_cache_size": 128, 00:07:13.170 "large_cache_size": 16, 00:07:13.170 "task_count": 2048, 00:07:13.170 "sequence_count": 2048, 00:07:13.170 "buf_count": 2048 00:07:13.170 } 00:07:13.170 } 00:07:13.170 ] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "bdev", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "bdev_set_options", 00:07:13.170 "params": { 00:07:13.170 "bdev_io_pool_size": 65535, 00:07:13.170 "bdev_io_cache_size": 256, 00:07:13.170 "bdev_auto_examine": true, 00:07:13.170 "iobuf_small_cache_size": 128, 00:07:13.170 "iobuf_large_cache_size": 16 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "bdev_raid_set_options", 00:07:13.170 "params": { 00:07:13.170 "process_window_size_kb": 1024, 00:07:13.170 "process_max_bandwidth_mb_sec": 0 
00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "bdev_iscsi_set_options", 00:07:13.170 "params": { 00:07:13.170 "timeout_sec": 30 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "bdev_nvme_set_options", 00:07:13.170 "params": { 00:07:13.170 "action_on_timeout": "none", 00:07:13.170 "timeout_us": 0, 00:07:13.170 "timeout_admin_us": 0, 00:07:13.170 "keep_alive_timeout_ms": 10000, 00:07:13.170 "arbitration_burst": 0, 00:07:13.170 "low_priority_weight": 0, 00:07:13.170 "medium_priority_weight": 0, 00:07:13.170 "high_priority_weight": 0, 00:07:13.170 "nvme_adminq_poll_period_us": 10000, 00:07:13.170 "nvme_ioq_poll_period_us": 0, 00:07:13.170 "io_queue_requests": 0, 00:07:13.170 "delay_cmd_submit": true, 00:07:13.170 "transport_retry_count": 4, 00:07:13.170 "bdev_retry_count": 3, 00:07:13.170 "transport_ack_timeout": 0, 00:07:13.170 "ctrlr_loss_timeout_sec": 0, 00:07:13.170 "reconnect_delay_sec": 0, 00:07:13.170 "fast_io_fail_timeout_sec": 0, 00:07:13.170 "disable_auto_failback": false, 00:07:13.170 "generate_uuids": false, 00:07:13.170 "transport_tos": 0, 00:07:13.170 "nvme_error_stat": false, 00:07:13.170 "rdma_srq_size": 0, 00:07:13.170 "io_path_stat": false, 00:07:13.170 "allow_accel_sequence": false, 00:07:13.170 "rdma_max_cq_size": 0, 00:07:13.170 "rdma_cm_event_timeout_ms": 0, 00:07:13.170 "dhchap_digests": [ 00:07:13.170 "sha256", 00:07:13.170 "sha384", 00:07:13.170 "sha512" 00:07:13.170 ], 00:07:13.170 "dhchap_dhgroups": [ 00:07:13.170 "null", 00:07:13.170 "ffdhe2048", 00:07:13.170 "ffdhe3072", 00:07:13.170 "ffdhe4096", 00:07:13.170 "ffdhe6144", 00:07:13.170 "ffdhe8192" 00:07:13.170 ] 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "bdev_nvme_set_hotplug", 00:07:13.170 "params": { 00:07:13.170 "period_us": 100000, 00:07:13.170 "enable": false 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "bdev_wait_for_examine" 00:07:13.170 } 00:07:13.170 ] 00:07:13.170 }, 00:07:13.170 { 
00:07:13.170 "subsystem": "scsi", 00:07:13.170 "config": null 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "scheduler", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "framework_set_scheduler", 00:07:13.170 "params": { 00:07:13.170 "name": "static" 00:07:13.170 } 00:07:13.170 } 00:07:13.170 ] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "vhost_scsi", 00:07:13.170 "config": [] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "vhost_blk", 00:07:13.170 "config": [] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "ublk", 00:07:13.170 "config": [] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "nbd", 00:07:13.170 "config": [] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "nvmf", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "nvmf_set_config", 00:07:13.170 "params": { 00:07:13.170 "discovery_filter": "match_any", 00:07:13.170 "admin_cmd_passthru": { 00:07:13.170 "identify_ctrlr": false 00:07:13.170 }, 00:07:13.170 "dhchap_digests": [ 00:07:13.170 "sha256", 00:07:13.170 "sha384", 00:07:13.170 "sha512" 00:07:13.170 ], 00:07:13.170 "dhchap_dhgroups": [ 00:07:13.170 "null", 00:07:13.170 "ffdhe2048", 00:07:13.170 "ffdhe3072", 00:07:13.170 "ffdhe4096", 00:07:13.170 "ffdhe6144", 00:07:13.170 "ffdhe8192" 00:07:13.170 ] 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "nvmf_set_max_subsystems", 00:07:13.170 "params": { 00:07:13.170 "max_subsystems": 1024 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "nvmf_set_crdt", 00:07:13.170 "params": { 00:07:13.170 "crdt1": 0, 00:07:13.170 "crdt2": 0, 00:07:13.170 "crdt3": 0 00:07:13.170 } 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "method": "nvmf_create_transport", 00:07:13.170 "params": { 00:07:13.170 "trtype": "TCP", 00:07:13.170 "max_queue_depth": 128, 00:07:13.170 "max_io_qpairs_per_ctrlr": 127, 00:07:13.170 "in_capsule_data_size": 4096, 00:07:13.170 "max_io_size": 131072, 00:07:13.170 
"io_unit_size": 131072, 00:07:13.170 "max_aq_depth": 128, 00:07:13.170 "num_shared_buffers": 511, 00:07:13.170 "buf_cache_size": 4294967295, 00:07:13.170 "dif_insert_or_strip": false, 00:07:13.170 "zcopy": false, 00:07:13.170 "c2h_success": true, 00:07:13.170 "sock_priority": 0, 00:07:13.170 "abort_timeout_sec": 1, 00:07:13.170 "ack_timeout": 0, 00:07:13.170 "data_wr_pool_size": 0 00:07:13.170 } 00:07:13.170 } 00:07:13.170 ] 00:07:13.170 }, 00:07:13.170 { 00:07:13.170 "subsystem": "iscsi", 00:07:13.170 "config": [ 00:07:13.170 { 00:07:13.170 "method": "iscsi_set_options", 00:07:13.170 "params": { 00:07:13.170 "node_base": "iqn.2016-06.io.spdk", 00:07:13.170 "max_sessions": 128, 00:07:13.170 "max_connections_per_session": 2, 00:07:13.171 "max_queue_depth": 64, 00:07:13.171 "default_time2wait": 2, 00:07:13.171 "default_time2retain": 20, 00:07:13.171 "first_burst_length": 8192, 00:07:13.171 "immediate_data": true, 00:07:13.171 "allow_duplicated_isid": false, 00:07:13.171 "error_recovery_level": 0, 00:07:13.171 "nop_timeout": 60, 00:07:13.171 "nop_in_interval": 30, 00:07:13.171 "disable_chap": false, 00:07:13.171 "require_chap": false, 00:07:13.171 "mutual_chap": false, 00:07:13.171 "chap_group": 0, 00:07:13.171 "max_large_datain_per_connection": 64, 00:07:13.171 "max_r2t_per_connection": 4, 00:07:13.171 "pdu_pool_size": 36864, 00:07:13.171 "immediate_data_pool_size": 16384, 00:07:13.171 "data_out_pool_size": 2048 00:07:13.171 } 00:07:13.171 } 00:07:13.171 ] 00:07:13.171 } 00:07:13.171 ] 00:07:13.171 } 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2465555 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2465555 ']' 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2465555 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465555 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465555' 00:07:13.171 killing process with pid 2465555 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2465555 00:07:13.171 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2465555 00:07:13.428 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2465581 00:07:13.428 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:13.428 10:17:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2465581 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2465581 ']' 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2465581 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465581 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465581' 00:07:18.706 killing process with pid 2465581 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2465581 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2465581 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:18.706 00:07:18.706 real 0m6.246s 00:07:18.706 user 0m5.963s 00:07:18.706 sys 0m0.575s 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.706 10:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.706 ************************************ 00:07:18.706 END TEST skip_rpc_with_json 00:07:18.706 ************************************ 00:07:18.706 10:17:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:18.706 10:17:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.706 10:17:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.706 10:17:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.965 ************************************ 00:07:18.965 START TEST skip_rpc_with_delay 00:07:18.965 ************************************ 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:18.965 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:18.965 [2024-12-09 10:17:56.501480] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
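The `NOT` wrapper exercised here succeeds exactly when the wrapped command fails, which is how the test asserts that `spdk_tgt --wait-for-rpc` must error out without an RPC server. A reduced sketch of the core inversion (the real helper also special-cases exit statuses above 128, i.e. deaths by signal, which this sketch omits):

```shell
# Reduced sketch of the NOT helper: capture the wrapped command's exit status
# and invert it, so an expected failure becomes a test success.
NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))    # NOT succeeds iff the wrapped command failed
}

NOT false && echo "false failed, so NOT succeeds"
NOT true  || echo "true succeeded, so NOT fails"
```

Capturing `$?` into `es` with `|| es=$?` rather than running the command bare keeps the helper compatible with `set -e` shells, since the command never fails "unhandled".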
00:07:18.966 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:18.966 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.966 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.966 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.966 00:07:18.966 real 0m0.067s 00:07:18.966 user 0m0.045s 00:07:18.966 sys 0m0.022s 00:07:18.966 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.966 10:17:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:18.966 ************************************ 00:07:18.966 END TEST skip_rpc_with_delay 00:07:18.966 ************************************ 00:07:18.966 10:17:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:18.966 10:17:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:18.966 10:17:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:18.966 10:17:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.966 10:17:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.966 10:17:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.966 ************************************ 00:07:18.966 START TEST exit_on_failed_rpc_init 00:07:18.966 ************************************ 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2466570 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2466570 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2466570 ']' 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.966 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:18.966 [2024-12-09 10:17:56.643083] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:18.966 [2024-12-09 10:17:56.643126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466570 ] 00:07:19.224 [2024-12-09 10:17:56.719248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.224 [2024-12-09 10:17:56.761175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.483 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.483 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:19.483 10:17:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.483 10:17:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:19.483 
10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:19.484 10:17:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:19.484 [2024-12-09 10:17:57.037926] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:19.484 [2024-12-09 10:17:57.037972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466773 ] 00:07:19.484 [2024-12-09 10:17:57.113049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.484 [2024-12-09 10:17:57.153352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.484 [2024-12-09 10:17:57.153406] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:19.484 [2024-12-09 10:17:57.153416] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:19.484 [2024-12-09 10:17:57.153422] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2466570 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2466570 ']' 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2466570 00:07:19.484 10:17:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.484 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466570 00:07:19.742 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.742 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.742 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466570' 00:07:19.742 killing process with pid 2466570 00:07:19.742 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2466570 00:07:19.742 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2466570 00:07:20.001 00:07:20.001 real 0m0.956s 00:07:20.001 user 0m1.014s 00:07:20.001 sys 0m0.391s 00:07:20.001 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.001 10:17:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:20.001 ************************************ 00:07:20.001 END TEST exit_on_failed_rpc_init 00:07:20.001 ************************************ 00:07:20.001 10:17:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:20.001 00:07:20.001 real 0m13.084s 00:07:20.001 user 0m12.342s 00:07:20.001 sys 0m1.536s 00:07:20.001 10:17:57 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.001 10:17:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.001 ************************************ 00:07:20.001 END TEST skip_rpc 00:07:20.001 ************************************ 00:07:20.001 10:17:57 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:20.001 10:17:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.001 10:17:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.001 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:07:20.001 ************************************ 00:07:20.001 START TEST rpc_client 00:07:20.001 ************************************ 00:07:20.001 10:17:57 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:20.260 * Looking for test storage... 00:07:20.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.260 10:17:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.260 --rc genhtml_branch_coverage=1 00:07:20.260 --rc genhtml_function_coverage=1 00:07:20.260 --rc genhtml_legend=1 00:07:20.260 --rc geninfo_all_blocks=1 00:07:20.260 --rc geninfo_unexecuted_blocks=1 00:07:20.260 00:07:20.260 ' 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.260 --rc genhtml_branch_coverage=1 
00:07:20.260 --rc genhtml_function_coverage=1 00:07:20.260 --rc genhtml_legend=1 00:07:20.260 --rc geninfo_all_blocks=1 00:07:20.260 --rc geninfo_unexecuted_blocks=1 00:07:20.260 00:07:20.260 ' 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.260 --rc genhtml_branch_coverage=1 00:07:20.260 --rc genhtml_function_coverage=1 00:07:20.260 --rc genhtml_legend=1 00:07:20.260 --rc geninfo_all_blocks=1 00:07:20.260 --rc geninfo_unexecuted_blocks=1 00:07:20.260 00:07:20.260 ' 00:07:20.260 10:17:57 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.260 --rc genhtml_branch_coverage=1 00:07:20.260 --rc genhtml_function_coverage=1 00:07:20.260 --rc genhtml_legend=1 00:07:20.260 --rc geninfo_all_blocks=1 00:07:20.260 --rc geninfo_unexecuted_blocks=1 00:07:20.260 00:07:20.260 ' 00:07:20.261 10:17:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:20.261 OK 00:07:20.261 10:17:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:20.261 00:07:20.261 real 0m0.198s 00:07:20.261 user 0m0.124s 00:07:20.261 sys 0m0.087s 00:07:20.261 10:17:57 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.261 10:17:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:20.261 ************************************ 00:07:20.261 END TEST rpc_client 00:07:20.261 ************************************ 00:07:20.261 10:17:57 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:20.261 10:17:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.261 10:17:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.261 10:17:57 -- common/autotest_common.sh@10 
-- # set +x 00:07:20.261 ************************************ 00:07:20.261 START TEST json_config 00:07:20.261 ************************************ 00:07:20.261 10:17:57 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:20.520 10:17:57 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.520 10:17:57 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.520 10:17:57 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.520 10:17:58 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.520 10:17:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.520 10:17:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.520 10:17:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.520 10:17:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.520 10:17:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.520 10:17:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.520 10:17:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.520 10:17:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:20.520 10:17:58 json_config -- scripts/common.sh@345 -- # : 1 00:07:20.520 10:17:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.520 10:17:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.520 10:17:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:20.520 10:17:58 json_config -- scripts/common.sh@353 -- # local d=1 00:07:20.520 10:17:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.520 10:17:58 json_config -- scripts/common.sh@355 -- # echo 1 00:07:20.520 10:17:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.520 10:17:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@353 -- # local d=2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.520 10:17:58 json_config -- scripts/common.sh@355 -- # echo 2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.520 10:17:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.520 10:17:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.520 10:17:58 json_config -- scripts/common.sh@368 -- # return 0 00:07:20.520 10:17:58 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.520 10:17:58 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.520 --rc genhtml_branch_coverage=1 00:07:20.520 --rc genhtml_function_coverage=1 00:07:20.520 --rc genhtml_legend=1 00:07:20.520 --rc geninfo_all_blocks=1 00:07:20.520 --rc geninfo_unexecuted_blocks=1 00:07:20.520 00:07:20.520 ' 00:07:20.520 10:17:58 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.520 --rc genhtml_branch_coverage=1 00:07:20.520 --rc genhtml_function_coverage=1 00:07:20.520 --rc genhtml_legend=1 00:07:20.520 --rc geninfo_all_blocks=1 00:07:20.520 --rc geninfo_unexecuted_blocks=1 00:07:20.520 00:07:20.520 ' 00:07:20.520 10:17:58 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.520 --rc genhtml_branch_coverage=1 00:07:20.520 --rc genhtml_function_coverage=1 00:07:20.520 --rc genhtml_legend=1 00:07:20.520 --rc geninfo_all_blocks=1 00:07:20.520 --rc geninfo_unexecuted_blocks=1 00:07:20.520 00:07:20.520 ' 00:07:20.520 10:17:58 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.520 --rc genhtml_branch_coverage=1 00:07:20.520 --rc genhtml_function_coverage=1 00:07:20.520 --rc genhtml_legend=1 00:07:20.520 --rc geninfo_all_blocks=1 00:07:20.520 --rc geninfo_unexecuted_blocks=1 00:07:20.520 00:07:20.520 ' 00:07:20.520 10:17:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.521 10:17:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.521 10:17:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.521 10:17:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.521 10:17:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.521 10:17:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.521 10:17:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.521 10:17:58 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.521 10:17:58 json_config -- paths/export.sh@5 -- # export PATH 00:07:20.521 10:17:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@51 -- # : 0 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.521 10:17:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:20.521 INFO: JSON configuration test init 00:07:20.521 10:17:58 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 10:17:58 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:20.521 10:17:58 json_config -- json_config/common.sh@9 -- # local app=target 00:07:20.521 10:17:58 json_config -- json_config/common.sh@10 -- # shift 00:07:20.521 10:17:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:20.521 10:17:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:20.521 10:17:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:20.521 10:17:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:20.521 10:17:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:20.521 10:17:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2467036 00:07:20.521 10:17:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:20.521 Waiting for target to run... 
00:07:20.521 10:17:58 json_config -- json_config/common.sh@25 -- # waitforlisten 2467036 /var/tmp/spdk_tgt.sock 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 2467036 ']' 00:07:20.521 10:17:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:20.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.521 10:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 [2024-12-09 10:17:58.171787] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:20.521 [2024-12-09 10:17:58.171849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467036 ] 00:07:21.089 [2024-12-09 10:17:58.626994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.089 [2024-12-09 10:17:58.685743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.346 10:17:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.347 10:17:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:21.347 10:17:59 json_config -- json_config/common.sh@26 -- # echo '' 00:07:21.347 00:07:21.347 10:17:59 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:21.347 10:17:59 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:21.347 10:17:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.347 10:17:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.347 10:17:59 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:21.347 10:17:59 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:21.347 10:17:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.347 10:17:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.347 10:17:59 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:21.347 10:17:59 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:21.347 10:17:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:24.631 10:18:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.631 10:18:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:24.631 10:18:02 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:24.631 10:18:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@54 -- # sort 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:24.891 10:18:02 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:24.891 10:18:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.891 10:18:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:24.891 10:18:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.891 10:18:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:24.891 10:18:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:24.891 MallocForNvmf0 00:07:24.891 10:18:02 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:07:24.891 10:18:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:25.150 MallocForNvmf1 00:07:25.150 10:18:02 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:25.150 10:18:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:25.408 [2024-12-09 10:18:02.943882] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.408 10:18:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.408 10:18:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.667 10:18:03 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:25.667 10:18:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:25.667 10:18:03 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:25.667 10:18:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:25.927 10:18:03 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:25.927 10:18:03 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:26.186 [2024-12-09 10:18:03.658120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:26.186 10:18:03 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:26.186 10:18:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.186 10:18:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.186 10:18:03 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:26.186 10:18:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.186 10:18:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.186 10:18:03 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:26.186 10:18:03 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:26.186 10:18:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:26.446 MallocBdevForConfigChangeCheck 00:07:26.446 10:18:03 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:26.446 10:18:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.446 10:18:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.446 10:18:03 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:26.446 10:18:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:26.704 10:18:04 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:07:26.704 INFO: shutting down applications...
00:07:26.704 10:18:04 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:07:26.704 10:18:04 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:07:26.704 10:18:04 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:07:26.704 10:18:04 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:29.229 Calling clear_iscsi_subsystem
00:07:29.229 Calling clear_nvmf_subsystem
00:07:29.229 Calling clear_nbd_subsystem
00:07:29.229 Calling clear_ublk_subsystem
00:07:29.229 Calling clear_vhost_blk_subsystem
00:07:29.229 Calling clear_vhost_scsi_subsystem
00:07:29.229 Calling clear_bdev_subsystem
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@350 -- # count=100
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@352 -- # break
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:07:29.229 10:18:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:07:29.229 10:18:06 json_config -- json_config/common.sh@31 -- # local app=target
00:07:29.229 10:18:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:29.229 10:18:06 json_config -- json_config/common.sh@35 -- # [[ -n 2467036 ]]
00:07:29.229 10:18:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2467036
00:07:29.229 10:18:06 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:29.229 10:18:06 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:29.229 10:18:06 json_config -- json_config/common.sh@41 -- # kill -0 2467036
00:07:29.229 10:18:06 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:29.795 10:18:07 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:29.795 10:18:07 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:29.795 10:18:07 json_config -- json_config/common.sh@41 -- # kill -0 2467036
00:07:29.795 10:18:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:29.795 10:18:07 json_config -- json_config/common.sh@43 -- # break
00:07:29.795 10:18:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:29.795 10:18:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:29.795 SPDK target shutdown done
00:07:29.795 10:18:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:07:29.795 INFO: relaunching applications...
00:07:29.795 10:18:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:29.795 10:18:07 json_config -- json_config/common.sh@9 -- # local app=target
00:07:29.795 10:18:07 json_config -- json_config/common.sh@10 -- # shift
00:07:29.795 10:18:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:29.795 10:18:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:29.795 10:18:07 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:29.795 10:18:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:29.795 10:18:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:29.795 10:18:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2468777
00:07:29.795 10:18:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:29.795 10:18:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:29.795 Waiting for target to run...
00:07:29.795 10:18:07 json_config -- json_config/common.sh@25 -- # waitforlisten 2468777 /var/tmp/spdk_tgt.sock
00:07:29.795 10:18:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 2468777 ']'
00:07:29.795 10:18:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:29.795 10:18:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:29.795 10:18:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:29.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:29.795 10:18:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:29.795 10:18:07 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:29.795 [2024-12-09 10:18:07.408206] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
00:07:29.795 [2024-12-09 10:18:07.408267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468777 ]
00:07:30.373 [2024-12-09 10:18:07.879069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.373 [2024-12-09 10:18:07.931255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.659 [2024-12-09 10:18:10.965613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:33.659 [2024-12-09 10:18:10.997938] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:07:34.226 10:18:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:34.226 10:18:11 json_config -- common/autotest_common.sh@868 -- # return 0
00:07:34.226 10:18:11 json_config -- json_config/common.sh@26 -- # echo ''
00:07:34.226 
00:07:34.226 10:18:11 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:07:34.226 10:18:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:34.226 INFO: Checking if target configuration is the same...
00:07:34.226 10:18:11 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.226 10:18:11 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:07:34.226 10:18:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:34.226 + '[' 2 -ne 2 ']'
00:07:34.226 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:34.226 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:07:34.226 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:34.226 +++ basename /dev/fd/62
00:07:34.226 ++ mktemp /tmp/62.XXX
00:07:34.226 + tmp_file_1=/tmp/62.qVS
00:07:34.226 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.226 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:34.226 + tmp_file_2=/tmp/spdk_tgt_config.json.NMI
00:07:34.226 + ret=0
00:07:34.226 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:34.485 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:34.485 + diff -u /tmp/62.qVS /tmp/spdk_tgt_config.json.NMI
00:07:34.485 + echo 'INFO: JSON config files are the same'
00:07:34.485 INFO: JSON config files are the same
00:07:34.485 + rm /tmp/62.qVS /tmp/spdk_tgt_config.json.NMI
00:07:34.485 + exit 0
00:07:34.485 10:18:12 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:07:34.485 10:18:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:34.485 INFO: changing configuration and checking if this can be detected...
00:07:34.485 10:18:12 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:34.485 10:18:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:34.744 10:18:12 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.744 10:18:12 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:07:34.744 10:18:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:34.744 + '[' 2 -ne 2 ']'
00:07:34.744 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:34.744 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:07:34.744 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:34.744 +++ basename /dev/fd/62
00:07:34.744 ++ mktemp /tmp/62.XXX
00:07:34.744 + tmp_file_1=/tmp/62.QWW
00:07:34.744 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.744 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:34.744 + tmp_file_2=/tmp/spdk_tgt_config.json.LkW
00:07:34.744 + ret=0
00:07:34.744 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:35.004 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:35.004 + diff -u /tmp/62.QWW /tmp/spdk_tgt_config.json.LkW
00:07:35.004 + ret=1
00:07:35.004 + echo '=== Start of file: /tmp/62.QWW ==='
00:07:35.004 + cat /tmp/62.QWW
00:07:35.004 + echo '=== End of file: /tmp/62.QWW ==='
00:07:35.004 + echo ''
00:07:35.004 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LkW ==='
00:07:35.004 + cat /tmp/spdk_tgt_config.json.LkW
00:07:35.004 + echo '=== End of file: /tmp/spdk_tgt_config.json.LkW ==='
00:07:35.004 + echo ''
00:07:35.004 + rm /tmp/62.QWW /tmp/spdk_tgt_config.json.LkW
00:07:35.004 + exit 1
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:07:35.004 INFO: configuration change detected.
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@324 -- # [[ -n 2468777 ]]
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@200 -- # uname -s
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:35.004 10:18:12 json_config -- json_config/json_config.sh@330 -- # killprocess 2468777
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@954 -- # '[' -z 2468777 ']'
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@958 -- # kill -0 2468777
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@959 -- # uname
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:35.004 10:18:12 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468777
00:07:35.264 10:18:12 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:35.264 10:18:12 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:35.264 10:18:12 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468777'
00:07:35.264 killing process with pid 2468777
00:07:35.264 10:18:12 json_config -- common/autotest_common.sh@973 -- # kill 2468777
00:07:35.264 10:18:12 json_config -- common/autotest_common.sh@978 -- # wait 2468777
00:07:37.166 10:18:14 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:37.166 10:18:14 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:07:37.166 10:18:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:37.166 10:18:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:37.166 10:18:14 json_config -- json_config/json_config.sh@335 -- # return 0
00:07:37.166 10:18:14 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:07:37.166 INFO: Success
00:07:37.166 
00:07:37.166 real 0m16.857s
00:07:37.166 user 0m17.205s
00:07:37.166 sys 0m2.776s
00:07:37.166 10:18:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:37.166 10:18:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:37.166 ************************************
00:07:37.166 END TEST json_config
00:07:37.166 ************************************
00:07:37.166 10:18:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:37.167 10:18:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:37.167 10:18:14 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:37.167 10:18:14 -- common/autotest_common.sh@10 -- # set +x
00:07:37.167 ************************************
00:07:37.167 START TEST json_config_extra_key
00:07:37.167 ************************************
00:07:37.167 10:18:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:37.426 10:18:14 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:37.426 10:18:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:07:37.426 10:18:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:37.426 10:18:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:37.426 10:18:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:07:37.426 10:18:14 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:37.426 10:18:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:37.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:37.426 --rc genhtml_branch_coverage=1
00:07:37.426 --rc genhtml_function_coverage=1
00:07:37.426 --rc genhtml_legend=1
00:07:37.426 --rc geninfo_all_blocks=1
00:07:37.426 --rc geninfo_unexecuted_blocks=1
00:07:37.426 
00:07:37.426 '
00:07:37.426 10:18:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:37.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:37.426 --rc genhtml_branch_coverage=1
00:07:37.427 --rc genhtml_function_coverage=1
00:07:37.427 --rc genhtml_legend=1
00:07:37.427 --rc geninfo_all_blocks=1
00:07:37.427 --rc geninfo_unexecuted_blocks=1
00:07:37.427 
00:07:37.427 '
00:07:37.427 10:18:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:37.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:37.427 --rc genhtml_branch_coverage=1
00:07:37.427 --rc genhtml_function_coverage=1
00:07:37.427 --rc genhtml_legend=1
00:07:37.427 --rc geninfo_all_blocks=1
00:07:37.427 --rc geninfo_unexecuted_blocks=1
00:07:37.427 
00:07:37.427 '
00:07:37.427 10:18:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:37.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:37.427 --rc genhtml_branch_coverage=1
00:07:37.427 --rc genhtml_function_coverage=1
00:07:37.427 --rc genhtml_legend=1
00:07:37.427 --rc geninfo_all_blocks=1
00:07:37.427 --rc geninfo_unexecuted_blocks=1
00:07:37.427 
00:07:37.427 '
00:07:37.427 10:18:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:37.427 10:18:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:37.427 10:18:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:07:37.427 10:18:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:37.427 10:18:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:37.427 10:18:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:37.427 10:18:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.427 10:18:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.427 10:18:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.427 10:18:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:37.427 10:18:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:37.427 10:18:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:07:37.427 INFO: launching applications...
00:07:37.427 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2470657
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:37.427 Waiting for target to run...
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2470657 /var/tmp/spdk_tgt.sock
00:07:37.427 10:18:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:07:37.427 10:18:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2470657 ']'
00:07:37.427 10:18:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:37.427 10:18:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.427 10:18:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:37.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:37.427 10:18:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.427 10:18:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:37.427 [2024-12-09 10:18:15.080878] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
00:07:37.427 [2024-12-09 10:18:15.080932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470657 ]
00:07:37.995 [2024-12-09 10:18:15.530595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.995 [2024-12-09 10:18:15.588171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.253 10:18:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.253 10:18:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:07:38.253 
00:07:38.253 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:07:38.253 INFO: shutting down applications...
00:07:38.253 10:18:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2470657 ]]
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2470657
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2470657
00:07:38.253 10:18:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:38.820 10:18:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:38.820 10:18:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:38.820 10:18:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2470657
00:07:38.820 10:18:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:38.820 10:18:16 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:38.820 10:18:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:38.820 10:18:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:38.820 SPDK target shutdown done
00:07:38.820 10:18:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:07:38.820 Success
00:07:38.820 
00:07:38.820 real 0m1.573s
00:07:38.820 user 0m1.191s
00:07:38.820 sys 0m0.573s
00:07:38.820 10:18:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.820 10:18:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:38.820 ************************************
00:07:38.820 END TEST json_config_extra_key
00:07:38.820 ************************************
00:07:38.820 10:18:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:38.820 10:18:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.820 10:18:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.820 10:18:16 -- common/autotest_common.sh@10 -- # set +x
00:07:38.820 ************************************
00:07:38.820 START TEST alias_rpc
00:07:38.820 ************************************
00:07:38.820 10:18:16 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:39.079 * Looking for test storage...
00:07:39.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:07:39.079 10:18:16 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:39.079 10:18:16 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:07:39.079 10:18:16 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:39.079 10:18:16 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@345 -- # : 1
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:39.079 10:18:16 alias_rpc -- scripts/common.sh@368 -- # return 0
00:07:39.079 10:18:16 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:39.079 10:18:16 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.080 --rc genhtml_branch_coverage=1
00:07:39.080 --rc genhtml_function_coverage=1
00:07:39.080 --rc genhtml_legend=1
00:07:39.080 --rc geninfo_all_blocks=1
00:07:39.080 --rc geninfo_unexecuted_blocks=1
00:07:39.080 
00:07:39.080 '
00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.080 --rc genhtml_branch_coverage=1
00:07:39.080 --rc genhtml_function_coverage=1
00:07:39.080 --rc genhtml_legend=1
00:07:39.080 --rc geninfo_all_blocks=1
00:07:39.080 --rc geninfo_unexecuted_blocks=1
00:07:39.080 
00:07:39.080 '
00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@1725 --
# export 'LCOV=lcov 00:07:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.080 --rc genhtml_branch_coverage=1 00:07:39.080 --rc genhtml_function_coverage=1 00:07:39.080 --rc genhtml_legend=1 00:07:39.080 --rc geninfo_all_blocks=1 00:07:39.080 --rc geninfo_unexecuted_blocks=1 00:07:39.080 00:07:39.080 ' 00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:39.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.080 --rc genhtml_branch_coverage=1 00:07:39.080 --rc genhtml_function_coverage=1 00:07:39.080 --rc genhtml_legend=1 00:07:39.080 --rc geninfo_all_blocks=1 00:07:39.080 --rc geninfo_unexecuted_blocks=1 00:07:39.080 00:07:39.080 ' 00:07:39.080 10:18:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.080 10:18:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2470949 00:07:39.080 10:18:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:39.080 10:18:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2470949 00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2470949 ']' 00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.080 10:18:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.080 [2024-12-09 10:18:16.718886] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:39.080 [2024-12-09 10:18:16.718932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470949 ] 00:07:39.080 [2024-12-09 10:18:16.791880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.338 [2024-12-09 10:18:16.832464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.338 10:18:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.338 10:18:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.338 10:18:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:39.596 10:18:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2470949 00:07:39.596 10:18:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2470949 ']' 00:07:39.596 10:18:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2470949 00:07:39.596 10:18:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:39.596 10:18:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.596 10:18:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470949 00:07:39.596 10:18:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.855 10:18:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.855 10:18:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470949' 00:07:39.855 killing process with pid 2470949 00:07:39.855 10:18:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 2470949 00:07:39.855 10:18:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 2470949 00:07:40.114 00:07:40.114 real 0m1.131s 00:07:40.114 user 0m1.153s 00:07:40.114 sys 0m0.409s 00:07:40.114 10:18:17 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.114 10:18:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.114 ************************************ 00:07:40.114 END TEST alias_rpc 00:07:40.114 ************************************ 00:07:40.114 10:18:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:40.114 10:18:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:40.114 10:18:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.114 10:18:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.114 10:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.114 ************************************ 00:07:40.114 START TEST spdkcli_tcp 00:07:40.114 ************************************ 00:07:40.114 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:40.114 * Looking for test storage... 
00:07:40.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:40.114 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.114 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.114 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.373 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.373 10:18:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:40.373 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.373 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.373 --rc genhtml_branch_coverage=1 00:07:40.373 --rc genhtml_function_coverage=1 00:07:40.373 --rc genhtml_legend=1 00:07:40.373 --rc geninfo_all_blocks=1 00:07:40.373 --rc geninfo_unexecuted_blocks=1 00:07:40.373 00:07:40.373 ' 00:07:40.373 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.374 --rc genhtml_branch_coverage=1 00:07:40.374 --rc genhtml_function_coverage=1 00:07:40.374 --rc genhtml_legend=1 00:07:40.374 --rc geninfo_all_blocks=1 00:07:40.374 --rc geninfo_unexecuted_blocks=1 00:07:40.374 00:07:40.374 ' 00:07:40.374 10:18:17 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.374 --rc genhtml_branch_coverage=1 00:07:40.374 --rc genhtml_function_coverage=1 00:07:40.374 --rc genhtml_legend=1 00:07:40.374 --rc geninfo_all_blocks=1 00:07:40.374 --rc geninfo_unexecuted_blocks=1 00:07:40.374 00:07:40.374 ' 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.374 --rc genhtml_branch_coverage=1 00:07:40.374 --rc genhtml_function_coverage=1 00:07:40.374 --rc genhtml_legend=1 00:07:40.374 --rc geninfo_all_blocks=1 00:07:40.374 --rc geninfo_unexecuted_blocks=1 00:07:40.374 00:07:40.374 ' 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2471238 00:07:40.374 10:18:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:40.374 10:18:17 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2471238 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2471238 ']' 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.374 10:18:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.374 [2024-12-09 10:18:17.925600] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:40.374 [2024-12-09 10:18:17.925648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471238 ] 00:07:40.374 [2024-12-09 10:18:18.002428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.374 [2024-12-09 10:18:18.045406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.374 [2024-12-09 10:18:18.045408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.309 10:18:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.309 10:18:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:41.309 10:18:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2471343 00:07:41.309 10:18:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:41.309 10:18:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:41.309 [ 00:07:41.309 "bdev_malloc_delete", 00:07:41.309 "bdev_malloc_create", 00:07:41.309 "bdev_null_resize", 00:07:41.309 "bdev_null_delete", 00:07:41.309 "bdev_null_create", 00:07:41.309 "bdev_nvme_cuse_unregister", 00:07:41.309 "bdev_nvme_cuse_register", 00:07:41.309 "bdev_opal_new_user", 00:07:41.309 "bdev_opal_set_lock_state", 00:07:41.309 "bdev_opal_delete", 00:07:41.309 "bdev_opal_get_info", 00:07:41.309 "bdev_opal_create", 00:07:41.309 "bdev_nvme_opal_revert", 00:07:41.309 "bdev_nvme_opal_init", 00:07:41.309 "bdev_nvme_send_cmd", 00:07:41.309 "bdev_nvme_set_keys", 00:07:41.309 "bdev_nvme_get_path_iostat", 00:07:41.309 "bdev_nvme_get_mdns_discovery_info", 00:07:41.309 "bdev_nvme_stop_mdns_discovery", 00:07:41.309 "bdev_nvme_start_mdns_discovery", 00:07:41.309 "bdev_nvme_set_multipath_policy", 00:07:41.309 "bdev_nvme_set_preferred_path", 00:07:41.309 "bdev_nvme_get_io_paths", 00:07:41.309 "bdev_nvme_remove_error_injection", 00:07:41.309 "bdev_nvme_add_error_injection", 00:07:41.309 "bdev_nvme_get_discovery_info", 00:07:41.309 "bdev_nvme_stop_discovery", 00:07:41.309 "bdev_nvme_start_discovery", 00:07:41.309 "bdev_nvme_get_controller_health_info", 00:07:41.309 "bdev_nvme_disable_controller", 00:07:41.309 "bdev_nvme_enable_controller", 00:07:41.309 "bdev_nvme_reset_controller", 00:07:41.309 "bdev_nvme_get_transport_statistics", 00:07:41.309 "bdev_nvme_apply_firmware", 00:07:41.309 "bdev_nvme_detach_controller", 00:07:41.309 "bdev_nvme_get_controllers", 00:07:41.309 "bdev_nvme_attach_controller", 00:07:41.309 "bdev_nvme_set_hotplug", 00:07:41.309 "bdev_nvme_set_options", 00:07:41.309 "bdev_passthru_delete", 00:07:41.309 "bdev_passthru_create", 00:07:41.309 "bdev_lvol_set_parent_bdev", 00:07:41.309 "bdev_lvol_set_parent", 00:07:41.309 "bdev_lvol_check_shallow_copy", 00:07:41.309 "bdev_lvol_start_shallow_copy", 00:07:41.309 "bdev_lvol_grow_lvstore", 00:07:41.309 "bdev_lvol_get_lvols", 00:07:41.309 
"bdev_lvol_get_lvstores", 00:07:41.309 "bdev_lvol_delete", 00:07:41.309 "bdev_lvol_set_read_only", 00:07:41.309 "bdev_lvol_resize", 00:07:41.309 "bdev_lvol_decouple_parent", 00:07:41.309 "bdev_lvol_inflate", 00:07:41.309 "bdev_lvol_rename", 00:07:41.309 "bdev_lvol_clone_bdev", 00:07:41.309 "bdev_lvol_clone", 00:07:41.309 "bdev_lvol_snapshot", 00:07:41.309 "bdev_lvol_create", 00:07:41.309 "bdev_lvol_delete_lvstore", 00:07:41.309 "bdev_lvol_rename_lvstore", 00:07:41.309 "bdev_lvol_create_lvstore", 00:07:41.309 "bdev_raid_set_options", 00:07:41.309 "bdev_raid_remove_base_bdev", 00:07:41.309 "bdev_raid_add_base_bdev", 00:07:41.309 "bdev_raid_delete", 00:07:41.309 "bdev_raid_create", 00:07:41.309 "bdev_raid_get_bdevs", 00:07:41.309 "bdev_error_inject_error", 00:07:41.309 "bdev_error_delete", 00:07:41.309 "bdev_error_create", 00:07:41.309 "bdev_split_delete", 00:07:41.309 "bdev_split_create", 00:07:41.309 "bdev_delay_delete", 00:07:41.309 "bdev_delay_create", 00:07:41.309 "bdev_delay_update_latency", 00:07:41.309 "bdev_zone_block_delete", 00:07:41.309 "bdev_zone_block_create", 00:07:41.309 "blobfs_create", 00:07:41.309 "blobfs_detect", 00:07:41.309 "blobfs_set_cache_size", 00:07:41.309 "bdev_aio_delete", 00:07:41.309 "bdev_aio_rescan", 00:07:41.309 "bdev_aio_create", 00:07:41.309 "bdev_ftl_set_property", 00:07:41.309 "bdev_ftl_get_properties", 00:07:41.309 "bdev_ftl_get_stats", 00:07:41.309 "bdev_ftl_unmap", 00:07:41.309 "bdev_ftl_unload", 00:07:41.309 "bdev_ftl_delete", 00:07:41.309 "bdev_ftl_load", 00:07:41.309 "bdev_ftl_create", 00:07:41.309 "bdev_virtio_attach_controller", 00:07:41.309 "bdev_virtio_scsi_get_devices", 00:07:41.309 "bdev_virtio_detach_controller", 00:07:41.309 "bdev_virtio_blk_set_hotplug", 00:07:41.309 "bdev_iscsi_delete", 00:07:41.309 "bdev_iscsi_create", 00:07:41.309 "bdev_iscsi_set_options", 00:07:41.309 "accel_error_inject_error", 00:07:41.309 "ioat_scan_accel_module", 00:07:41.309 "dsa_scan_accel_module", 00:07:41.309 "iaa_scan_accel_module", 
00:07:41.309 "vfu_virtio_create_fs_endpoint", 00:07:41.309 "vfu_virtio_create_scsi_endpoint", 00:07:41.309 "vfu_virtio_scsi_remove_target", 00:07:41.309 "vfu_virtio_scsi_add_target", 00:07:41.309 "vfu_virtio_create_blk_endpoint", 00:07:41.309 "vfu_virtio_delete_endpoint", 00:07:41.309 "keyring_file_remove_key", 00:07:41.309 "keyring_file_add_key", 00:07:41.309 "keyring_linux_set_options", 00:07:41.309 "fsdev_aio_delete", 00:07:41.309 "fsdev_aio_create", 00:07:41.309 "iscsi_get_histogram", 00:07:41.309 "iscsi_enable_histogram", 00:07:41.309 "iscsi_set_options", 00:07:41.309 "iscsi_get_auth_groups", 00:07:41.309 "iscsi_auth_group_remove_secret", 00:07:41.309 "iscsi_auth_group_add_secret", 00:07:41.309 "iscsi_delete_auth_group", 00:07:41.309 "iscsi_create_auth_group", 00:07:41.309 "iscsi_set_discovery_auth", 00:07:41.309 "iscsi_get_options", 00:07:41.309 "iscsi_target_node_request_logout", 00:07:41.309 "iscsi_target_node_set_redirect", 00:07:41.309 "iscsi_target_node_set_auth", 00:07:41.309 "iscsi_target_node_add_lun", 00:07:41.309 "iscsi_get_stats", 00:07:41.309 "iscsi_get_connections", 00:07:41.309 "iscsi_portal_group_set_auth", 00:07:41.309 "iscsi_start_portal_group", 00:07:41.309 "iscsi_delete_portal_group", 00:07:41.309 "iscsi_create_portal_group", 00:07:41.309 "iscsi_get_portal_groups", 00:07:41.309 "iscsi_delete_target_node", 00:07:41.309 "iscsi_target_node_remove_pg_ig_maps", 00:07:41.309 "iscsi_target_node_add_pg_ig_maps", 00:07:41.309 "iscsi_create_target_node", 00:07:41.309 "iscsi_get_target_nodes", 00:07:41.309 "iscsi_delete_initiator_group", 00:07:41.309 "iscsi_initiator_group_remove_initiators", 00:07:41.309 "iscsi_initiator_group_add_initiators", 00:07:41.309 "iscsi_create_initiator_group", 00:07:41.309 "iscsi_get_initiator_groups", 00:07:41.309 "nvmf_set_crdt", 00:07:41.309 "nvmf_set_config", 00:07:41.309 "nvmf_set_max_subsystems", 00:07:41.309 "nvmf_stop_mdns_prr", 00:07:41.309 "nvmf_publish_mdns_prr", 00:07:41.309 "nvmf_subsystem_get_listeners", 
00:07:41.309 "nvmf_subsystem_get_qpairs", 00:07:41.309 "nvmf_subsystem_get_controllers", 00:07:41.309 "nvmf_get_stats", 00:07:41.309 "nvmf_get_transports", 00:07:41.309 "nvmf_create_transport", 00:07:41.309 "nvmf_get_targets", 00:07:41.309 "nvmf_delete_target", 00:07:41.309 "nvmf_create_target", 00:07:41.309 "nvmf_subsystem_allow_any_host", 00:07:41.309 "nvmf_subsystem_set_keys", 00:07:41.309 "nvmf_subsystem_remove_host", 00:07:41.310 "nvmf_subsystem_add_host", 00:07:41.310 "nvmf_ns_remove_host", 00:07:41.310 "nvmf_ns_add_host", 00:07:41.310 "nvmf_subsystem_remove_ns", 00:07:41.310 "nvmf_subsystem_set_ns_ana_group", 00:07:41.310 "nvmf_subsystem_add_ns", 00:07:41.310 "nvmf_subsystem_listener_set_ana_state", 00:07:41.310 "nvmf_discovery_get_referrals", 00:07:41.310 "nvmf_discovery_remove_referral", 00:07:41.310 "nvmf_discovery_add_referral", 00:07:41.310 "nvmf_subsystem_remove_listener", 00:07:41.310 "nvmf_subsystem_add_listener", 00:07:41.310 "nvmf_delete_subsystem", 00:07:41.310 "nvmf_create_subsystem", 00:07:41.310 "nvmf_get_subsystems", 00:07:41.310 "env_dpdk_get_mem_stats", 00:07:41.310 "nbd_get_disks", 00:07:41.310 "nbd_stop_disk", 00:07:41.310 "nbd_start_disk", 00:07:41.310 "ublk_recover_disk", 00:07:41.310 "ublk_get_disks", 00:07:41.310 "ublk_stop_disk", 00:07:41.310 "ublk_start_disk", 00:07:41.310 "ublk_destroy_target", 00:07:41.310 "ublk_create_target", 00:07:41.310 "virtio_blk_create_transport", 00:07:41.310 "virtio_blk_get_transports", 00:07:41.310 "vhost_controller_set_coalescing", 00:07:41.310 "vhost_get_controllers", 00:07:41.310 "vhost_delete_controller", 00:07:41.310 "vhost_create_blk_controller", 00:07:41.310 "vhost_scsi_controller_remove_target", 00:07:41.310 "vhost_scsi_controller_add_target", 00:07:41.310 "vhost_start_scsi_controller", 00:07:41.310 "vhost_create_scsi_controller", 00:07:41.310 "thread_set_cpumask", 00:07:41.310 "scheduler_set_options", 00:07:41.310 "framework_get_governor", 00:07:41.310 "framework_get_scheduler", 00:07:41.310 
"framework_set_scheduler", 00:07:41.310 "framework_get_reactors", 00:07:41.310 "thread_get_io_channels", 00:07:41.310 "thread_get_pollers", 00:07:41.310 "thread_get_stats", 00:07:41.310 "framework_monitor_context_switch", 00:07:41.310 "spdk_kill_instance", 00:07:41.310 "log_enable_timestamps", 00:07:41.310 "log_get_flags", 00:07:41.310 "log_clear_flag", 00:07:41.310 "log_set_flag", 00:07:41.310 "log_get_level", 00:07:41.310 "log_set_level", 00:07:41.310 "log_get_print_level", 00:07:41.310 "log_set_print_level", 00:07:41.310 "framework_enable_cpumask_locks", 00:07:41.310 "framework_disable_cpumask_locks", 00:07:41.310 "framework_wait_init", 00:07:41.310 "framework_start_init", 00:07:41.310 "scsi_get_devices", 00:07:41.310 "bdev_get_histogram", 00:07:41.310 "bdev_enable_histogram", 00:07:41.310 "bdev_set_qos_limit", 00:07:41.310 "bdev_set_qd_sampling_period", 00:07:41.310 "bdev_get_bdevs", 00:07:41.310 "bdev_reset_iostat", 00:07:41.310 "bdev_get_iostat", 00:07:41.310 "bdev_examine", 00:07:41.310 "bdev_wait_for_examine", 00:07:41.310 "bdev_set_options", 00:07:41.310 "accel_get_stats", 00:07:41.310 "accel_set_options", 00:07:41.310 "accel_set_driver", 00:07:41.310 "accel_crypto_key_destroy", 00:07:41.310 "accel_crypto_keys_get", 00:07:41.310 "accel_crypto_key_create", 00:07:41.310 "accel_assign_opc", 00:07:41.310 "accel_get_module_info", 00:07:41.310 "accel_get_opc_assignments", 00:07:41.310 "vmd_rescan", 00:07:41.310 "vmd_remove_device", 00:07:41.310 "vmd_enable", 00:07:41.310 "sock_get_default_impl", 00:07:41.310 "sock_set_default_impl", 00:07:41.310 "sock_impl_set_options", 00:07:41.310 "sock_impl_get_options", 00:07:41.310 "iobuf_get_stats", 00:07:41.310 "iobuf_set_options", 00:07:41.310 "keyring_get_keys", 00:07:41.310 "vfu_tgt_set_base_path", 00:07:41.310 "framework_get_pci_devices", 00:07:41.310 "framework_get_config", 00:07:41.310 "framework_get_subsystems", 00:07:41.310 "fsdev_set_opts", 00:07:41.310 "fsdev_get_opts", 00:07:41.310 "trace_get_info", 
00:07:41.310 "trace_get_tpoint_group_mask", 00:07:41.310 "trace_disable_tpoint_group", 00:07:41.310 "trace_enable_tpoint_group", 00:07:41.310 "trace_clear_tpoint_mask", 00:07:41.310 "trace_set_tpoint_mask", 00:07:41.310 "notify_get_notifications", 00:07:41.310 "notify_get_types", 00:07:41.310 "spdk_get_version", 00:07:41.310 "rpc_get_methods" 00:07:41.310 ] 00:07:41.310 10:18:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:41.310 10:18:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.310 10:18:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.310 10:18:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:41.310 10:18:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2471238 00:07:41.310 10:18:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2471238 ']' 00:07:41.310 10:18:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2471238 00:07:41.310 10:18:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:41.310 10:18:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.310 10:18:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471238 00:07:41.568 10:18:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.568 10:18:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.568 10:18:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471238' 00:07:41.568 killing process with pid 2471238 00:07:41.568 10:18:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2471238 00:07:41.568 10:18:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2471238 00:07:41.827 00:07:41.827 real 0m1.666s 00:07:41.827 user 0m3.083s 00:07:41.827 sys 0m0.490s 00:07:41.827 10:18:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.827 10:18:19 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.827 ************************************ 00:07:41.827 END TEST spdkcli_tcp 00:07:41.827 ************************************ 00:07:41.827 10:18:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.827 10:18:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.827 10:18:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.827 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:41.827 ************************************ 00:07:41.827 START TEST dpdk_mem_utility 00:07:41.827 ************************************ 00:07:41.827 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.827 * Looking for test storage... 00:07:41.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:41.827 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.827 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.827 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.086 10:18:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:07:42.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.086 --rc genhtml_branch_coverage=1 00:07:42.086 --rc genhtml_function_coverage=1 00:07:42.086 --rc genhtml_legend=1 00:07:42.086 --rc geninfo_all_blocks=1 00:07:42.086 --rc geninfo_unexecuted_blocks=1 00:07:42.086 00:07:42.086 ' 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.086 --rc genhtml_branch_coverage=1 00:07:42.086 --rc genhtml_function_coverage=1 00:07:42.086 --rc genhtml_legend=1 00:07:42.086 --rc geninfo_all_blocks=1 00:07:42.086 --rc geninfo_unexecuted_blocks=1 00:07:42.086 00:07:42.086 ' 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.086 --rc genhtml_branch_coverage=1 00:07:42.086 --rc genhtml_function_coverage=1 00:07:42.086 --rc genhtml_legend=1 00:07:42.086 --rc geninfo_all_blocks=1 00:07:42.086 --rc geninfo_unexecuted_blocks=1 00:07:42.086 00:07:42.086 ' 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.086 --rc genhtml_branch_coverage=1 00:07:42.086 --rc genhtml_function_coverage=1 00:07:42.086 --rc genhtml_legend=1 00:07:42.086 --rc geninfo_all_blocks=1 00:07:42.086 --rc geninfo_unexecuted_blocks=1 00:07:42.086 00:07:42.086 ' 00:07:42.086 10:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:42.086 10:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2471555 00:07:42.086 10:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2471555 00:07:42.086 10:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2471555 ']' 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.086 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:42.086 [2024-12-09 10:18:19.651605] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:42.086 [2024-12-09 10:18:19.651653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471555 ] 00:07:42.086 [2024-12-09 10:18:19.723804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.086 [2024-12-09 10:18:19.765533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.345 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.345 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:42.345 10:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:42.345 10:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:42.345 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.345 
10:18:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:42.345 { 00:07:42.345 "filename": "/tmp/spdk_mem_dump.txt" 00:07:42.345 } 00:07:42.345 10:18:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.345 10:18:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:42.345 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:42.345 1 heaps totaling size 818.000000 MiB 00:07:42.345 size: 818.000000 MiB heap id: 0 00:07:42.345 end heaps---------- 00:07:42.345 9 mempools totaling size 603.782043 MiB 00:07:42.345 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:42.345 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:42.345 size: 100.555481 MiB name: bdev_io_2471555 00:07:42.345 size: 50.003479 MiB name: msgpool_2471555 00:07:42.345 size: 36.509338 MiB name: fsdev_io_2471555 00:07:42.345 size: 21.763794 MiB name: PDU_Pool 00:07:42.345 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:42.345 size: 4.133484 MiB name: evtpool_2471555 00:07:42.345 size: 0.026123 MiB name: Session_Pool 00:07:42.345 end mempools------- 00:07:42.345 6 memzones totaling size 4.142822 MiB 00:07:42.345 size: 1.000366 MiB name: RG_ring_0_2471555 00:07:42.345 size: 1.000366 MiB name: RG_ring_1_2471555 00:07:42.345 size: 1.000366 MiB name: RG_ring_4_2471555 00:07:42.345 size: 1.000366 MiB name: RG_ring_5_2471555 00:07:42.345 size: 0.125366 MiB name: RG_ring_2_2471555 00:07:42.345 size: 0.015991 MiB name: RG_ring_3_2471555 00:07:42.345 end memzones------- 00:07:42.345 10:18:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:42.603 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:42.603 list of free elements. 
size: 10.852478 MiB 00:07:42.603 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:42.603 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:42.603 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:42.603 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:42.603 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:42.604 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:42.604 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:42.604 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:42.604 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:42.604 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:42.604 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:42.604 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:42.604 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:42.604 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:42.604 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:42.604 list of standard malloc elements. 
size: 199.218628 MiB 00:07:42.604 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:42.604 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:42.604 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:42.604 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:42.604 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:42.604 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:42.604 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:42.604 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:42.604 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:42.604 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:42.604 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:42.604 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:42.604 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:42.604 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:42.604 list of memzone associated elements. 
size: 607.928894 MiB 00:07:42.604 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:42.604 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:42.604 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:42.604 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:42.604 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:42.604 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2471555_0 00:07:42.604 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:42.604 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2471555_0 00:07:42.604 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:42.604 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2471555_0 00:07:42.604 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:42.604 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:42.604 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:42.604 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:42.604 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:42.604 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2471555_0 00:07:42.604 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:42.604 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2471555 00:07:42.604 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:42.604 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2471555 00:07:42.604 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:42.604 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:42.604 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:42.604 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:42.604 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:42.604 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:42.604 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:42.604 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:42.604 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:42.604 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2471555 00:07:42.604 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:42.604 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2471555 00:07:42.604 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:42.604 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2471555 00:07:42.604 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:42.604 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2471555 00:07:42.604 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:42.604 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2471555 00:07:42.604 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:42.604 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2471555 00:07:42.604 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:42.604 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:42.604 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:42.604 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:42.604 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:42.604 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:42.604 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:42.604 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2471555 00:07:42.604 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:42.604 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2471555 00:07:42.604 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:07:42.604 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:42.604 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:42.604 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:42.604 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:42.604 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2471555 00:07:42.604 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:42.604 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:42.604 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:42.604 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2471555 00:07:42.604 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:42.604 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2471555 00:07:42.604 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:42.604 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2471555 00:07:42.604 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:42.604 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:42.604 10:18:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:42.604 10:18:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2471555 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2471555 ']' 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2471555 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471555 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.604 10:18:20 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471555' 00:07:42.604 killing process with pid 2471555 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2471555 00:07:42.604 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2471555 00:07:42.863 00:07:42.863 real 0m1.026s 00:07:42.863 user 0m0.972s 00:07:42.863 sys 0m0.414s 00:07:42.863 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.863 10:18:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:42.863 ************************************ 00:07:42.863 END TEST dpdk_mem_utility 00:07:42.863 ************************************ 00:07:42.863 10:18:20 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:42.863 10:18:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.863 10:18:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.863 10:18:20 -- common/autotest_common.sh@10 -- # set +x 00:07:42.863 ************************************ 00:07:42.863 START TEST event 00:07:42.863 ************************************ 00:07:42.863 10:18:20 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:43.122 * Looking for test storage... 
00:07:43.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:43.122 10:18:20 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.122 10:18:20 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.122 10:18:20 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.122 10:18:20 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.122 10:18:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.122 10:18:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.122 10:18:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.122 10:18:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.122 10:18:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.122 10:18:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.122 10:18:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.122 10:18:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.122 10:18:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.122 10:18:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.122 10:18:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.122 10:18:20 event -- scripts/common.sh@344 -- # case "$op" in 00:07:43.122 10:18:20 event -- scripts/common.sh@345 -- # : 1 00:07:43.122 10:18:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.122 10:18:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.122 10:18:20 event -- scripts/common.sh@365 -- # decimal 1 00:07:43.123 10:18:20 event -- scripts/common.sh@353 -- # local d=1 00:07:43.123 10:18:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.123 10:18:20 event -- scripts/common.sh@355 -- # echo 1 00:07:43.123 10:18:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.123 10:18:20 event -- scripts/common.sh@366 -- # decimal 2 00:07:43.123 10:18:20 event -- scripts/common.sh@353 -- # local d=2 00:07:43.123 10:18:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.123 10:18:20 event -- scripts/common.sh@355 -- # echo 2 00:07:43.123 10:18:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.123 10:18:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.123 10:18:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.123 10:18:20 event -- scripts/common.sh@368 -- # return 0 00:07:43.123 10:18:20 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.123 10:18:20 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.123 --rc genhtml_branch_coverage=1 00:07:43.123 --rc genhtml_function_coverage=1 00:07:43.123 --rc genhtml_legend=1 00:07:43.123 --rc geninfo_all_blocks=1 00:07:43.123 --rc geninfo_unexecuted_blocks=1 00:07:43.123 00:07:43.123 ' 00:07:43.123 10:18:20 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.123 --rc genhtml_branch_coverage=1 00:07:43.123 --rc genhtml_function_coverage=1 00:07:43.123 --rc genhtml_legend=1 00:07:43.123 --rc geninfo_all_blocks=1 00:07:43.123 --rc geninfo_unexecuted_blocks=1 00:07:43.123 00:07:43.123 ' 00:07:43.123 10:18:20 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.123 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:43.123 --rc genhtml_branch_coverage=1 00:07:43.123 --rc genhtml_function_coverage=1 00:07:43.123 --rc genhtml_legend=1 00:07:43.123 --rc geninfo_all_blocks=1 00:07:43.123 --rc geninfo_unexecuted_blocks=1 00:07:43.123 00:07:43.123 ' 00:07:43.123 10:18:20 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.123 --rc genhtml_branch_coverage=1 00:07:43.123 --rc genhtml_function_coverage=1 00:07:43.123 --rc genhtml_legend=1 00:07:43.123 --rc geninfo_all_blocks=1 00:07:43.123 --rc geninfo_unexecuted_blocks=1 00:07:43.123 00:07:43.123 ' 00:07:43.123 10:18:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:43.123 10:18:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:43.123 10:18:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:43.123 10:18:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:43.123 10:18:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.123 10:18:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:43.123 ************************************ 00:07:43.123 START TEST event_perf 00:07:43.123 ************************************ 00:07:43.123 10:18:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:43.123 Running I/O for 1 seconds...[2024-12-09 10:18:20.747128] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:43.123 [2024-12-09 10:18:20.747198] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471845 ] 00:07:43.123 [2024-12-09 10:18:20.825068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.381 [2024-12-09 10:18:20.869497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.381 [2024-12-09 10:18:20.869607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.381 [2024-12-09 10:18:20.869712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.381 [2024-12-09 10:18:20.869712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.315 Running I/O for 1 seconds... 00:07:44.315 lcore 0: 201399 00:07:44.315 lcore 1: 201399 00:07:44.315 lcore 2: 201400 00:07:44.315 lcore 3: 201400 00:07:44.315 done. 
00:07:44.315 00:07:44.315 real 0m1.183s 00:07:44.315 user 0m4.100s 00:07:44.315 sys 0m0.078s 00:07:44.315 10:18:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.315 10:18:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:44.315 ************************************ 00:07:44.315 END TEST event_perf 00:07:44.315 ************************************ 00:07:44.315 10:18:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:44.315 10:18:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:44.315 10:18:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.315 10:18:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.315 ************************************ 00:07:44.315 START TEST event_reactor 00:07:44.315 ************************************ 00:07:44.315 10:18:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:44.315 [2024-12-09 10:18:21.998775] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:07:44.315 [2024-12-09 10:18:21.998845] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472098 ] 00:07:44.573 [2024-12-09 10:18:22.078851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.573 [2024-12-09 10:18:22.119379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.507 test_start 00:07:45.507 oneshot 00:07:45.507 tick 100 00:07:45.507 tick 100 00:07:45.507 tick 250 00:07:45.507 tick 100 00:07:45.507 tick 100 00:07:45.507 tick 100 00:07:45.507 tick 250 00:07:45.507 tick 500 00:07:45.507 tick 100 00:07:45.507 tick 100 00:07:45.507 tick 250 00:07:45.507 tick 100 00:07:45.507 tick 100 00:07:45.507 test_end 00:07:45.507 00:07:45.507 real 0m1.181s 00:07:45.507 user 0m1.101s 00:07:45.507 sys 0m0.077s 00:07:45.507 10:18:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.507 10:18:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:45.507 ************************************ 00:07:45.507 END TEST event_reactor 00:07:45.507 ************************************ 00:07:45.507 10:18:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:45.507 10:18:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:45.507 10:18:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.507 10:18:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.507 ************************************ 00:07:45.507 START TEST event_reactor_perf 00:07:45.507 ************************************ 00:07:45.507 10:18:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:45.765 [2024-12-09 10:18:23.249427] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:45.765 [2024-12-09 10:18:23.249499] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472344 ] 00:07:45.765 [2024-12-09 10:18:23.332180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.765 [2024-12-09 10:18:23.372836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.700 test_start 00:07:46.700 test_end 00:07:46.700 Performance: 517083 events per second 00:07:46.700 00:07:46.700 real 0m1.185s 00:07:46.700 user 0m1.103s 00:07:46.700 sys 0m0.078s 00:07:46.700 10:18:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.700 10:18:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.700 ************************************ 00:07:46.700 END TEST event_reactor_perf 00:07:46.700 ************************************ 00:07:46.959 10:18:24 event -- event/event.sh@49 -- # uname -s 00:07:46.959 10:18:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:46.959 10:18:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:46.959 10:18:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.959 10:18:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.959 10:18:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.959 ************************************ 00:07:46.959 START TEST event_scheduler 00:07:46.959 ************************************ 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:46.959 * Looking for test storage... 00:07:46.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.959 10:18:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.959 --rc genhtml_branch_coverage=1 00:07:46.959 --rc genhtml_function_coverage=1 00:07:46.959 --rc genhtml_legend=1 00:07:46.959 --rc geninfo_all_blocks=1 00:07:46.959 --rc geninfo_unexecuted_blocks=1 00:07:46.959 00:07:46.959 ' 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.959 --rc genhtml_branch_coverage=1 00:07:46.959 --rc genhtml_function_coverage=1 00:07:46.959 --rc 
genhtml_legend=1 00:07:46.959 --rc geninfo_all_blocks=1 00:07:46.959 --rc geninfo_unexecuted_blocks=1 00:07:46.959 00:07:46.959 ' 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.959 --rc genhtml_branch_coverage=1 00:07:46.959 --rc genhtml_function_coverage=1 00:07:46.959 --rc genhtml_legend=1 00:07:46.959 --rc geninfo_all_blocks=1 00:07:46.959 --rc geninfo_unexecuted_blocks=1 00:07:46.959 00:07:46.959 ' 00:07:46.959 10:18:24 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.959 --rc genhtml_branch_coverage=1 00:07:46.959 --rc genhtml_function_coverage=1 00:07:46.959 --rc genhtml_legend=1 00:07:46.959 --rc geninfo_all_blocks=1 00:07:46.959 --rc geninfo_unexecuted_blocks=1 00:07:46.959 00:07:46.959 ' 00:07:46.959 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:46.959 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2472635 00:07:46.959 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:46.960 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.960 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2472635 00:07:46.960 10:18:24 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2472635 ']' 00:07:46.960 10:18:24 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.960 10:18:24 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.960 10:18:24 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.960 10:18:24 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.960 10:18:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 [2024-12-09 10:18:24.712607] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:07:47.217 [2024-12-09 10:18:24.712650] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472635 ] 00:07:47.217 [2024-12-09 10:18:24.788284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.217 [2024-12-09 10:18:24.833045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.217 [2024-12-09 10:18:24.833156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.217 [2024-12-09 10:18:24.833261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.217 [2024-12-09 10:18:24.833262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.217 10:18:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.217 10:18:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:47.217 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:47.217 10:18:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.217 10:18:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 [2024-12-09 10:18:24.865706] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:47.217 [2024-12-09 10:18:24.865723] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:47.218 [2024-12-09 10:18:24.865732] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:47.218 [2024-12-09 10:18:24.865737] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:47.218 [2024-12-09 10:18:24.865742] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:47.218 10:18:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.218 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:47.218 10:18:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.218 10:18:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.218 [2024-12-09 10:18:24.940279] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:47.218 10:18:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.218 10:18:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:47.476 10:18:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.476 10:18:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.476 10:18:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 ************************************ 00:07:47.476 START TEST scheduler_create_thread 00:07:47.476 ************************************ 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 2 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 3 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 4 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 5 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 6 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 7 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 8 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 9 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 10 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.476 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.410 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.410 10:18:25 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:48.410 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.410 10:18:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.785 10:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.785 10:18:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:49.785 10:18:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:49.785 10:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.785 10:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.720 10:18:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.720 00:07:50.720 real 0m3.381s 00:07:50.720 user 0m0.024s 00:07:50.720 sys 0m0.006s 00:07:50.720 10:18:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.720 10:18:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.720 ************************************ 00:07:50.720 END TEST scheduler_create_thread 00:07:50.720 ************************************ 00:07:50.720 10:18:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:50.720 10:18:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2472635 00:07:50.720 10:18:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2472635 ']' 00:07:50.720 10:18:28 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2472635 00:07:50.720 10:18:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:50.720 10:18:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.720 10:18:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472635 00:07:50.979 10:18:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:50.979 10:18:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:50.979 10:18:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472635' 00:07:50.979 killing process with pid 2472635 00:07:50.979 10:18:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2472635 00:07:50.979 10:18:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2472635 00:07:51.237 [2024-12-09 10:18:28.736198] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
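The teardown traced above is autotest_common.sh's killprocess flow: probe the pid with `kill -0`, resolve the process name with `ps --no-headers -o comm=` (refusing to SIGKILL if it turns out to be `sudo`), then `kill -9` and reap. A minimal standalone sketch of that pattern — the function name is hypothetical and this is illustrative, not SPDK's actual helper:

```shell
# Sketch of a killprocess-style teardown (the real helper lives in
# autotest_common.sh; this standalone version is illustrative only).
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1       # bail out if already gone
    name=$(ps --no-headers -o comm= -p "$pid")   # what are we about to kill?
    if [ "$name" = sudo ]; then                  # same guard the log shows
        return 1
    fi
    echo "killing process with pid $pid"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null || true              # reap so no zombie remains
}

sleep 60 &                                       # throwaway victim process
victim=$!
killprocess_sketch "$victim"
kill -0 "$victim" 2>/dev/null || echo "process $victim is gone"
```

The `comm=` check matters because the test scripts run children under sudo; killing the `sudo` wrapper instead of the reactor process would leave the real workload running.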
00:07:51.237 00:07:51.237 real 0m4.450s 00:07:51.237 user 0m7.741s 00:07:51.237 sys 0m0.381s 00:07:51.237 10:18:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.237 10:18:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:51.237 ************************************ 00:07:51.237 END TEST event_scheduler 00:07:51.237 ************************************ 00:07:51.495 10:18:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:51.495 10:18:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:51.495 10:18:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.495 10:18:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.495 10:18:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:51.495 ************************************ 00:07:51.495 START TEST app_repeat 00:07:51.495 ************************************ 00:07:51.495 10:18:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2473378 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2473378' 00:07:51.495 Process app_repeat pid: 2473378 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:51.495 spdk_app_start Round 0 00:07:51.495 10:18:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2473378 /var/tmp/spdk-nbd.sock 00:07:51.495 10:18:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473378 ']' 00:07:51.495 10:18:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:51.495 10:18:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.495 10:18:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:51.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:51.495 10:18:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.495 10:18:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:51.495 [2024-12-09 10:18:29.053324] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
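The `-m 0xF` / `-c 0x3` arguments appearing throughout this log are hexadecimal CPU core masks: SPDK starts one reactor per set bit, which is why four "Reactor started on core N" lines follow the scheduler app's 0xF mask and two follow app_repeat's 0x3. A hypothetical helper (not part of SPDK) showing how such a mask expands to core indices:

```shell
# Hypothetical helper (not in SPDK): expand a hex CPU mask, as passed to
# the apps above via -m/-c, into the list of core indices whose bits are set.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=""
    while (( mask > 0 )); do
        if (( mask & 1 )); then
            cores="$cores $core"       # this bit selects a reactor core
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores# }"                  # trim the leading space
}

mask_to_cores 0xF    # prints "0 1 2 3" -> one reactor per core, as logged
mask_to_cores 0x3    # prints "0 1"    -> the two app_repeat reactors
```

The scheduler run also passes `-p 0x2` / `--main-lcore=2`, which is why its RPC handling later reports `reactor_2` as the process name.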
00:07:51.495 [2024-12-09 10:18:29.053374] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473378 ] 00:07:51.495 [2024-12-09 10:18:29.130309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.495 [2024-12-09 10:18:29.170192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.495 [2024-12-09 10:18:29.170192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.754 10:18:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.754 10:18:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:51.754 10:18:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.754 Malloc0 00:07:51.754 10:18:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:52.012 Malloc1 00:07:52.012 10:18:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:52.012 
10:18:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.012 10:18:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:52.270 /dev/nbd0 00:07:52.270 10:18:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:52.270 10:18:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:52.270 1+0 records in 00:07:52.270 1+0 records out 00:07:52.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143835 s, 28.5 MB/s 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:52.270 10:18:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:52.270 10:18:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.270 10:18:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.270 10:18:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:52.528 /dev/nbd1 00:07:52.528 10:18:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:52.528 10:18:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:52.528 10:18:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:52.528 10:18:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:52.528 10:18:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:52.529 10:18:30 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:52.529 1+0 records in 00:07:52.529 1+0 records out 00:07:52.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199672 s, 20.5 MB/s 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:52.529 10:18:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:52.529 10:18:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.529 10:18:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.529 10:18:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.529 10:18:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.529 10:18:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:52.787 { 00:07:52.787 "nbd_device": "/dev/nbd0", 00:07:52.787 "bdev_name": "Malloc0" 00:07:52.787 }, 00:07:52.787 { 00:07:52.787 "nbd_device": "/dev/nbd1", 00:07:52.787 "bdev_name": "Malloc1" 00:07:52.787 } 00:07:52.787 ]' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:52.787 { 00:07:52.787 "nbd_device": "/dev/nbd0", 00:07:52.787 "bdev_name": "Malloc0" 00:07:52.787 
}, 00:07:52.787 { 00:07:52.787 "nbd_device": "/dev/nbd1", 00:07:52.787 "bdev_name": "Malloc1" 00:07:52.787 } 00:07:52.787 ]' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:52.787 /dev/nbd1' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:52.787 /dev/nbd1' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:52.787 256+0 records in 00:07:52.787 256+0 records out 00:07:52.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108146 s, 97.0 MB/s 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:52.787 256+0 records in 00:07:52.787 256+0 records out 00:07:52.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136849 s, 76.6 MB/s 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:52.787 256+0 records in 00:07:52.787 256+0 records out 00:07:52.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147032 s, 71.3 MB/s 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:52.787 10:18:30 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.787 10:18:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.045 10:18:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:53.303 10:18:30 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.303 10:18:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:53.571 10:18:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:53.571 10:18:31 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:53.829 10:18:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:54.087 [2024-12-09 10:18:31.562082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:54.087 [2024-12-09 10:18:31.598495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.087 [2024-12-09 10:18:31.598497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.087 [2024-12-09 10:18:31.639074] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:54.087 [2024-12-09 10:18:31.639114] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:57.372 10:18:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:57.372 10:18:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:57.372 spdk_app_start Round 1 00:07:57.372 10:18:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2473378 /var/tmp/spdk-nbd.sock 00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473378 ']' 00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:57.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.372 10:18:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:57.372 10:18:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:57.372 Malloc0 00:07:57.372 10:18:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:57.372 Malloc1 00:07:57.372 10:18:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.372 10:18:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.372 10:18:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:57.372 10:18:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:57.372 10:18:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.372 10:18:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:57.372 10:18:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.372 10:18:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.373 10:18:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:57.631 /dev/nbd0 00:07:57.631 10:18:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:57.631 10:18:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.631 1+0 records in 00:07:57.631 1+0 records out 00:07:57.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220399 s, 18.6 MB/s 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:57.631 10:18:35 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.631 10:18:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:57.631 10:18:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.631 10:18:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.631 10:18:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:57.889 /dev/nbd1 00:07:57.889 10:18:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:57.889 10:18:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.889 1+0 records in 00:07:57.889 1+0 records out 00:07:57.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139854 s, 29.3 MB/s 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.889 10:18:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:57.889 10:18:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.889 10:18:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.889 10:18:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.889 10:18:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.889 10:18:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.148 10:18:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:58.148 { 00:07:58.148 "nbd_device": "/dev/nbd0", 00:07:58.148 "bdev_name": "Malloc0" 00:07:58.148 }, 00:07:58.148 { 00:07:58.148 "nbd_device": "/dev/nbd1", 00:07:58.148 "bdev_name": "Malloc1" 00:07:58.148 } 00:07:58.148 ]' 00:07:58.148 10:18:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.148 { 00:07:58.148 "nbd_device": "/dev/nbd0", 00:07:58.148 "bdev_name": "Malloc0" 00:07:58.148 }, 00:07:58.148 { 00:07:58.148 "nbd_device": "/dev/nbd1", 00:07:58.148 "bdev_name": "Malloc1" 00:07:58.149 } 00:07:58.149 ]' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.149 /dev/nbd1' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.149 /dev/nbd1' 00:07:58.149 
10:18:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:58.149 256+0 records in 00:07:58.149 256+0 records out 00:07:58.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108311 s, 96.8 MB/s 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.149 256+0 records in 00:07:58.149 256+0 records out 00:07:58.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144903 s, 72.4 MB/s 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:58.149 256+0 records in 00:07:58.149 256+0 records out 00:07:58.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148889 s, 70.4 MB/s 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.149 10:18:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.407 10:18:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:58.665 10:18:36 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.665 10:18:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:58.924 10:18:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:58.924 10:18:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:59.182 10:18:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:59.182 [2024-12-09 10:18:36.885933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.440 [2024-12-09 10:18:36.923576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.440 [2024-12-09 10:18:36.923576] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.440 [2024-12-09 10:18:36.964862] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:59.440 [2024-12-09 10:18:36.964902] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:02.725 10:18:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:02.725 10:18:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:02.725 spdk_app_start Round 2 00:08:02.725 10:18:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2473378 /var/tmp/spdk-nbd.sock 00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473378 ']' 00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:02.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.725 10:18:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:02.725 10:18:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.725 Malloc0 00:08:02.725 10:18:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.725 Malloc1 00:08:02.725 10:18:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.725 10:18:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:02.984 /dev/nbd0 00:08:02.984 10:18:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:02.984 10:18:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.984 1+0 records in 00:08:02.984 1+0 records out 00:08:02.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243213 s, 16.8 MB/s 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:02.984 10:18:40 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.984 10:18:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:02.984 10:18:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.984 10:18:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.984 10:18:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:03.242 /dev/nbd1 00:08:03.242 10:18:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:03.242 10:18:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:03.242 1+0 records in 00:08:03.242 1+0 records out 00:08:03.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194103 s, 21.1 MB/s 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.242 10:18:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:03.242 10:18:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.242 10:18:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.242 10:18:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.242 10:18:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.242 10:18:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:03.502 { 00:08:03.502 "nbd_device": "/dev/nbd0", 00:08:03.502 "bdev_name": "Malloc0" 00:08:03.502 }, 00:08:03.502 { 00:08:03.502 "nbd_device": "/dev/nbd1", 00:08:03.502 "bdev_name": "Malloc1" 00:08:03.502 } 00:08:03.502 ]' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:03.502 { 00:08:03.502 "nbd_device": "/dev/nbd0", 00:08:03.502 "bdev_name": "Malloc0" 00:08:03.502 }, 00:08:03.502 { 00:08:03.502 "nbd_device": "/dev/nbd1", 00:08:03.502 "bdev_name": "Malloc1" 00:08:03.502 } 00:08:03.502 ]' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:03.502 /dev/nbd1' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:03.502 /dev/nbd1' 00:08:03.502 
10:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:03.502 256+0 records in 00:08:03.502 256+0 records out 00:08:03.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108152 s, 97.0 MB/s 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.502 256+0 records in 00:08:03.502 256+0 records out 00:08:03.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140835 s, 74.5 MB/s 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.502 256+0 records in 00:08:03.502 256+0 records out 00:08:03.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149818 s, 70.0 MB/s 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.502 10:18:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.761 10:18:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:04.020 10:18:41 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.020 10:18:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:04.280 10:18:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:04.280 10:18:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:04.540 10:18:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:04.540 [2024-12-09 10:18:42.155449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.540 [2024-12-09 10:18:42.192131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.540 [2024-12-09 10:18:42.192132] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.540 [2024-12-09 10:18:42.233171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:04.540 [2024-12-09 10:18:42.233213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:07.826 10:18:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2473378 /var/tmp/spdk-nbd.sock 00:08:07.826 10:18:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473378 ']' 00:08:07.826 10:18:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:07.826 10:18:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.826 10:18:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:07.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:07.826 10:18:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:07.827 10:18:45 event.app_repeat -- event/event.sh@39 -- # killprocess 2473378 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2473378 ']' 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2473378 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473378 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473378' 00:08:07.827 killing process with pid 2473378 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2473378 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2473378 00:08:07.827 spdk_app_start is called in Round 0. 00:08:07.827 Shutdown signal received, stop current app iteration 00:08:07.827 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:08:07.827 spdk_app_start is called in Round 1. 00:08:07.827 Shutdown signal received, stop current app iteration 00:08:07.827 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:08:07.827 spdk_app_start is called in Round 2. 
00:08:07.827 Shutdown signal received, stop current app iteration 00:08:07.827 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:08:07.827 spdk_app_start is called in Round 3. 00:08:07.827 Shutdown signal received, stop current app iteration 00:08:07.827 10:18:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:07.827 10:18:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:07.827 00:08:07.827 real 0m16.395s 00:08:07.827 user 0m35.988s 00:08:07.827 sys 0m2.601s 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.827 10:18:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.827 ************************************ 00:08:07.827 END TEST app_repeat 00:08:07.827 ************************************ 00:08:07.827 10:18:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:07.827 10:18:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:07.827 10:18:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.827 10:18:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.827 10:18:45 event -- common/autotest_common.sh@10 -- # set +x 00:08:07.827 ************************************ 00:08:07.827 START TEST cpu_locks 00:08:07.827 ************************************ 00:08:07.827 10:18:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:08.084 * Looking for test storage... 
00:08:08.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:08.084 10:18:45 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.084 10:18:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.084 10:18:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.084 10:18:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.084 10:18:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:08.084 10:18:45 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.084 10:18:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.084 --rc genhtml_branch_coverage=1 00:08:08.084 --rc genhtml_function_coverage=1 00:08:08.084 --rc genhtml_legend=1 00:08:08.084 --rc geninfo_all_blocks=1 00:08:08.084 --rc geninfo_unexecuted_blocks=1 00:08:08.084 00:08:08.084 ' 00:08:08.085 10:18:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.085 --rc genhtml_branch_coverage=1 00:08:08.085 --rc genhtml_function_coverage=1 00:08:08.085 --rc genhtml_legend=1 00:08:08.085 --rc geninfo_all_blocks=1 00:08:08.085 --rc geninfo_unexecuted_blocks=1 
00:08:08.085 00:08:08.085 ' 00:08:08.085 10:18:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.085 --rc genhtml_branch_coverage=1 00:08:08.085 --rc genhtml_function_coverage=1 00:08:08.085 --rc genhtml_legend=1 00:08:08.085 --rc geninfo_all_blocks=1 00:08:08.085 --rc geninfo_unexecuted_blocks=1 00:08:08.085 00:08:08.085 ' 00:08:08.085 10:18:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.085 --rc genhtml_branch_coverage=1 00:08:08.085 --rc genhtml_function_coverage=1 00:08:08.085 --rc genhtml_legend=1 00:08:08.085 --rc geninfo_all_blocks=1 00:08:08.085 --rc geninfo_unexecuted_blocks=1 00:08:08.085 00:08:08.085 ' 00:08:08.085 10:18:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:08.085 10:18:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:08.085 10:18:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:08.085 10:18:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:08.085 10:18:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.085 10:18:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.085 10:18:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.085 ************************************ 00:08:08.085 START TEST default_locks 00:08:08.085 ************************************ 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2476371 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2476371 00:08:08.085 10:18:45 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2476371 ']' 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.085 10:18:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.085 [2024-12-09 10:18:45.747189] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:08:08.085 [2024-12-09 10:18:45.747231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476371 ] 00:08:08.343 [2024-12-09 10:18:45.822678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.343 [2024-12-09 10:18:45.864367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2476371 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2476371 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:08.601 lslocks: write error 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2476371 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2476371 ']' 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2476371 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.601 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476371 00:08:08.860 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.860 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.860 10:18:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2476371' 00:08:08.860 killing process with pid 2476371 00:08:08.860 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2476371 00:08:08.860 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2476371 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2476371 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2476371 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2476371 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2476371 ']' 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2476371) - No such process 00:08:09.119 ERROR: process (pid: 2476371) is no longer running 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:09.119 00:08:09.119 real 0m0.973s 00:08:09.119 user 0m0.912s 00:08:09.119 sys 0m0.456s 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.119 10:18:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.119 ************************************ 00:08:09.119 END TEST default_locks 00:08:09.119 ************************************ 00:08:09.119 10:18:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:09.119 10:18:46 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.119 10:18:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.119 10:18:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.119 ************************************ 00:08:09.119 START TEST default_locks_via_rpc 00:08:09.119 ************************************ 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2476626 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2476626 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2476626 ']' 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.119 10:18:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.120 10:18:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.120 10:18:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.120 [2024-12-09 10:18:46.789877] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:08:09.120 [2024-12-09 10:18:46.789918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476626 ] 00:08:09.379 [2024-12-09 10:18:46.865453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.379 [2024-12-09 10:18:46.907202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.637 10:18:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2476626 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2476626 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2476626 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2476626 ']' 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2476626 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.637 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476626 00:08:09.896 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.896 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.896 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476626' 00:08:09.896 killing process with pid 2476626 00:08:09.896 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2476626 00:08:09.896 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2476626 00:08:10.154 00:08:10.154 real 0m0.966s 00:08:10.154 user 0m0.923s 00:08:10.154 sys 0m0.435s 00:08:10.154 10:18:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.154 10:18:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.154 ************************************ 00:08:10.154 END TEST default_locks_via_rpc 00:08:10.154 ************************************ 00:08:10.154 10:18:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:10.155 10:18:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.155 10:18:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.155 10:18:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.155 ************************************ 00:08:10.155 START TEST non_locking_app_on_locked_coremask 00:08:10.155 ************************************ 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2476781 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2476781 /var/tmp/spdk.sock 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2476781 ']' 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:10.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.155 10:18:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.155 [2024-12-09 10:18:47.829035] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:10.155 [2024-12-09 10:18:47.829080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476781 ] 00:08:10.414 [2024-12-09 10:18:47.905914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.414 [2024-12-09 10:18:47.947806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.673 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.673 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:10.673 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2476894 00:08:10.673 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2476894 /var/tmp/spdk2.sock 00:08:10.673 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:10.673 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2476894 ']' 00:08:10.673 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:08:10.674 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.674 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.674 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.674 10:18:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.674 [2024-12-09 10:18:48.212872] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:10.674 [2024-12-09 10:18:48.212918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476894 ] 00:08:10.674 [2024-12-09 10:18:48.301787] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:10.674 [2024-12-09 10:18:48.301814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.674 [2024-12-09 10:18:48.382203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.612 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.612 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:11.612 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2476781 00:08:11.613 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.613 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2476781 00:08:11.872 lslocks: write error 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2476781 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2476781 ']' 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2476781 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476781 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2476781' 00:08:11.872 killing process with pid 2476781 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2476781 00:08:11.872 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2476781 00:08:12.439 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2476894 00:08:12.439 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2476894 ']' 00:08:12.439 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2476894 00:08:12.439 10:18:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:12.439 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.439 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476894 00:08:12.439 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.439 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.439 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476894' 00:08:12.439 killing process with pid 2476894 00:08:12.439 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2476894 00:08:12.439 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2476894 00:08:12.698 00:08:12.698 real 0m2.577s 00:08:12.698 user 0m2.694s 00:08:12.698 sys 0m0.843s 00:08:12.699 10:18:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.699 10:18:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.699 ************************************ 00:08:12.699 END TEST non_locking_app_on_locked_coremask 00:08:12.699 ************************************ 00:08:12.699 10:18:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:12.699 10:18:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.699 10:18:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.699 10:18:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.959 ************************************ 00:08:12.959 START TEST locking_app_on_unlocked_coremask 00:08:12.959 ************************************ 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2477203 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2477203 /var/tmp/spdk.sock 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477203 ']' 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.959 10:18:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.959 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.959 [2024-12-09 10:18:50.476530] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:12.959 [2024-12-09 10:18:50.476575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477203 ] 00:08:12.959 [2024-12-09 10:18:50.553417] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:12.959 [2024-12-09 10:18:50.553443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.959 [2024-12-09 10:18:50.596064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2477388 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2477388 /var/tmp/spdk2.sock 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477388 ']' 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.219 10:18:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.219 [2024-12-09 10:18:50.864695] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:08:13.219 [2024-12-09 10:18:50.864744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477388 ] 00:08:13.478 [2024-12-09 10:18:50.948259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.478 [2024-12-09 10:18:51.028594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.046 10:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.046 10:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:14.046 10:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2477388 00:08:14.046 10:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2477388 00:08:14.046 10:18:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:14.616 lslocks: write error 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2477203 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2477203 ']' 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2477203 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477203 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477203' 00:08:14.616 killing process with pid 2477203 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2477203 00:08:14.616 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2477203 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2477388 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2477388 ']' 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2477388 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477388 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477388' 00:08:15.184 killing process with pid 2477388 00:08:15.184 10:18:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2477388 00:08:15.184 10:18:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2477388 00:08:15.752 00:08:15.752 real 0m2.767s 00:08:15.752 user 0m2.887s 00:08:15.752 sys 0m0.929s 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.752 ************************************ 00:08:15.752 END TEST locking_app_on_unlocked_coremask 00:08:15.752 ************************************ 00:08:15.752 10:18:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:15.752 10:18:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.752 10:18:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.752 10:18:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:15.752 ************************************ 00:08:15.752 START TEST locking_app_on_locked_coremask 00:08:15.752 ************************************ 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2477740 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2477740 /var/tmp/spdk.sock 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477740 ']' 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.752 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.752 [2024-12-09 10:18:53.313936] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:15.752 [2024-12-09 10:18:53.313977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477740 ] 00:08:15.752 [2024-12-09 10:18:53.387667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.752 [2024-12-09 10:18:53.427238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2477888 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2477888 /var/tmp/spdk2.sock 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2477888 /var/tmp/spdk2.sock 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:16.011 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.012 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2477888 /var/tmp/spdk2.sock 00:08:16.012 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477888 ']' 00:08:16.012 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:16.012 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.012 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:16.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:16.012 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.012 10:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.012 [2024-12-09 10:18:53.705389] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:16.012 [2024-12-09 10:18:53.705435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477888 ] 00:08:16.270 [2024-12-09 10:18:53.792744] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2477740 has claimed it. 00:08:16.270 [2024-12-09 10:18:53.792786] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:16.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2477888) - No such process 00:08:16.902 ERROR: process (pid: 2477888) is no longer running 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2477740 00:08:16.902 10:18:54 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2477740 00:08:16.902 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:17.160 lslocks: write error 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2477740 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2477740 ']' 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2477740 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477740 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477740' 00:08:17.160 killing process with pid 2477740 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2477740 00:08:17.160 10:18:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2477740 00:08:17.418 00:08:17.418 real 0m1.819s 00:08:17.418 user 0m1.946s 00:08:17.418 sys 0m0.608s 00:08:17.418 10:18:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.418 10:18:55 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.418 ************************************ 00:08:17.418 END TEST locking_app_on_locked_coremask 00:08:17.418 ************************************ 00:08:17.418 10:18:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:17.418 10:18:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.418 10:18:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.418 10:18:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 ************************************ 00:08:17.676 START TEST locking_overlapped_coremask 00:08:17.676 ************************************ 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2478152 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2478152 /var/tmp/spdk.sock 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2478152 ']' 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.676 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 [2024-12-09 10:18:55.191262] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:17.676 [2024-12-09 10:18:55.191300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478152 ] 00:08:17.676 [2024-12-09 10:18:55.265690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:17.676 [2024-12-09 10:18:55.304728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.676 [2024-12-09 10:18:55.304851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.676 [2024-12-09 10:18:55.304851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2478162 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2478162 /var/tmp/spdk2.sock 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2478162 /var/tmp/spdk2.sock 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2478162 /var/tmp/spdk2.sock 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2478162 ']' 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.934 10:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.934 [2024-12-09 10:18:55.571997] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:08:17.934 [2024-12-09 10:18:55.572042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478162 ] 00:08:18.192 [2024-12-09 10:18:55.664024] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2478152 has claimed it. 00:08:18.192 [2024-12-09 10:18:55.664065] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:18.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2478162) - No such process 00:08:18.759 ERROR: process (pid: 2478162) is no longer running 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2478152 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2478152 ']' 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2478152 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478152 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478152' 00:08:18.759 killing process with pid 2478152 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2478152 00:08:18.759 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2478152 00:08:19.018 00:08:19.018 real 0m1.416s 00:08:19.018 user 0m3.926s 00:08:19.018 sys 0m0.380s 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:19.018 
************************************ 00:08:19.018 END TEST locking_overlapped_coremask 00:08:19.018 ************************************ 00:08:19.018 10:18:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:19.018 10:18:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.018 10:18:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.018 10:18:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:19.018 ************************************ 00:08:19.018 START TEST locking_overlapped_coremask_via_rpc 00:08:19.018 ************************************ 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2478417 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2478417 /var/tmp/spdk.sock 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2478417 ']' 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:19.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.018 10:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.018 [2024-12-09 10:18:56.686327] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:19.018 [2024-12-09 10:18:56.686371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478417 ] 00:08:19.277 [2024-12-09 10:18:56.765665] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:19.277 [2024-12-09 10:18:56.765691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.277 [2024-12-09 10:18:56.809871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.277 [2024-12-09 10:18:56.809892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.277 [2024-12-09 10:18:56.809898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2478577 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2478577 /var/tmp/spdk2.sock 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2478577 ']' 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:19.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.843 10:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.101 [2024-12-09 10:18:57.600067] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:20.102 [2024-12-09 10:18:57.600119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478577 ] 00:08:20.102 [2024-12-09 10:18:57.694541] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:20.102 [2024-12-09 10:18:57.694568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.102 [2024-12-09 10:18:57.781661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.102 [2024-12-09 10:18:57.781773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.102 [2024-12-09 10:18:57.781774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.744 10:18:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.744 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.123 [2024-12-09 10:18:58.442881] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2478417 has claimed it. 00:08:21.123 request: 00:08:21.123 { 00:08:21.123 "method": "framework_enable_cpumask_locks", 00:08:21.123 "req_id": 1 00:08:21.123 } 00:08:21.123 Got JSON-RPC error response 00:08:21.123 response: 00:08:21.123 { 00:08:21.123 "code": -32603, 00:08:21.123 "message": "Failed to claim CPU core: 2" 00:08:21.123 } 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2478417 /var/tmp/spdk.sock 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2478417 ']' 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2478577 /var/tmp/spdk2.sock 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2478577 ']' 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.123 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:21.480 00:08:21.480 real 0m2.225s 00:08:21.480 user 0m1.008s 00:08:21.480 sys 0m0.144s 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.480 10:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.480 ************************************ 00:08:21.480 END TEST locking_overlapped_coremask_via_rpc 00:08:21.480 ************************************ 00:08:21.480 10:18:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:21.480 10:18:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2478417 ]] 00:08:21.480 10:18:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2478417 00:08:21.480 10:18:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478417 ']' 00:08:21.480 10:18:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478417 00:08:21.480 10:18:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:21.480 10:18:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.480 10:18:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478417 00:08:21.480 10:18:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.480 10:18:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.481 10:18:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478417' 00:08:21.481 killing process with pid 2478417 00:08:21.481 10:18:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2478417 00:08:21.481 10:18:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2478417 00:08:21.759 10:18:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2478577 ]] 00:08:21.759 10:18:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2478577 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478577 ']' 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478577 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478577 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2478577' 00:08:21.759 killing process with pid 2478577 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2478577 00:08:21.759 10:18:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2478577 00:08:22.019 10:18:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:22.019 10:18:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:22.019 10:18:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2478417 ]] 00:08:22.019 10:18:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2478417 00:08:22.019 10:18:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478417 ']' 00:08:22.019 10:18:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478417 00:08:22.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2478417) - No such process 00:08:22.019 10:18:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2478417 is not found' 00:08:22.019 Process with pid 2478417 is not found 00:08:22.019 10:18:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2478577 ]] 00:08:22.019 10:18:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2478577 00:08:22.019 10:18:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478577 ']' 00:08:22.019 10:18:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478577 00:08:22.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2478577) - No such process 00:08:22.019 10:18:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2478577 is not found' 00:08:22.019 Process with pid 2478577 is not found 00:08:22.019 10:18:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:22.019 00:08:22.019 real 0m14.141s 00:08:22.019 user 0m25.521s 00:08:22.019 sys 0m4.777s 00:08:22.019 10:18:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.019 
10:18:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.019 ************************************ 00:08:22.019 END TEST cpu_locks 00:08:22.019 ************************************ 00:08:22.019 00:08:22.019 real 0m39.141s 00:08:22.019 user 1m15.821s 00:08:22.019 sys 0m8.370s 00:08:22.019 10:18:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.019 10:18:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:22.019 ************************************ 00:08:22.019 END TEST event 00:08:22.019 ************************************ 00:08:22.019 10:18:59 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:22.019 10:18:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.019 10:18:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.019 10:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:22.019 ************************************ 00:08:22.019 START TEST thread 00:08:22.019 ************************************ 00:08:22.019 10:18:59 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:22.277 * Looking for test storage... 
00:08:22.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:22.277 10:18:59 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.277 10:18:59 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.277 10:18:59 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.277 10:18:59 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.277 10:18:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.277 10:18:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.277 10:18:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.277 10:18:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.277 10:18:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.277 10:18:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.277 10:18:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.277 10:18:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.277 10:18:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.277 10:18:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.277 10:18:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.277 10:18:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:22.277 10:18:59 thread -- scripts/common.sh@345 -- # : 1 00:08:22.277 10:18:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.277 10:18:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.277 10:18:59 thread -- scripts/common.sh@365 -- # decimal 1 00:08:22.277 10:18:59 thread -- scripts/common.sh@353 -- # local d=1 00:08:22.277 10:18:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.277 10:18:59 thread -- scripts/common.sh@355 -- # echo 1 00:08:22.277 10:18:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.277 10:18:59 thread -- scripts/common.sh@366 -- # decimal 2 00:08:22.277 10:18:59 thread -- scripts/common.sh@353 -- # local d=2 00:08:22.277 10:18:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.277 10:18:59 thread -- scripts/common.sh@355 -- # echo 2 00:08:22.277 10:18:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.277 10:18:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.277 10:18:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.277 10:18:59 thread -- scripts/common.sh@368 -- # return 0 00:08:22.277 10:18:59 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.277 10:18:59 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.278 --rc genhtml_branch_coverage=1 00:08:22.278 --rc genhtml_function_coverage=1 00:08:22.278 --rc genhtml_legend=1 00:08:22.278 --rc geninfo_all_blocks=1 00:08:22.278 --rc geninfo_unexecuted_blocks=1 00:08:22.278 00:08:22.278 ' 00:08:22.278 10:18:59 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.278 --rc genhtml_branch_coverage=1 00:08:22.278 --rc genhtml_function_coverage=1 00:08:22.278 --rc genhtml_legend=1 00:08:22.278 --rc geninfo_all_blocks=1 00:08:22.278 --rc geninfo_unexecuted_blocks=1 00:08:22.278 00:08:22.278 ' 00:08:22.278 10:18:59 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:22.278 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.278 --rc genhtml_branch_coverage=1 00:08:22.278 --rc genhtml_function_coverage=1 00:08:22.278 --rc genhtml_legend=1 00:08:22.278 --rc geninfo_all_blocks=1 00:08:22.278 --rc geninfo_unexecuted_blocks=1 00:08:22.278 00:08:22.278 ' 00:08:22.278 10:18:59 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.278 --rc genhtml_branch_coverage=1 00:08:22.278 --rc genhtml_function_coverage=1 00:08:22.278 --rc genhtml_legend=1 00:08:22.278 --rc geninfo_all_blocks=1 00:08:22.278 --rc geninfo_unexecuted_blocks=1 00:08:22.278 00:08:22.278 ' 00:08:22.278 10:18:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:22.278 10:18:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:22.278 10:18:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.278 10:18:59 thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.278 ************************************ 00:08:22.278 START TEST thread_poller_perf 00:08:22.278 ************************************ 00:08:22.278 10:18:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:22.278 [2024-12-09 10:18:59.954621] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
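The poller_perf summary lines below derive `poller_cost` from the totals they print: busy TSC cycles divided by `total_run_count`, then converted to nanoseconds via `tsc_hz`. The first run's figures (busy:2106009424 cyc, 417000 iterations, 2.1 GHz TSC) can be reproduced with shell integer arithmetic:

```shell
# Reproduce poller_cost from the first poller_perf run's reported totals.
busy_cyc=2106009424   # "busy: ... (cyc)" from the results
runs=417000           # total_run_count
tsc_hz=2100000000     # timestamp-counter frequency (2.1 GHz)

cost_cyc=$(( busy_cyc / runs ))                   # cycles per poller call
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # cycles -> nanoseconds
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# prints: poller_cost: 5050 (cyc), 2404 (nsec)
```

The second run's `poller_cost: 406 (cyc), 193 (nsec)` follows the same way from busy:2101371752 over 5170000 iterations; with a 0-microsecond period each poll is far cheaper because the poller never sleeps between calls.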
00:08:22.278 [2024-12-09 10:18:59.954689] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479002 ] 00:08:22.537 [2024-12-09 10:19:00.035157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.537 [2024-12-09 10:19:00.083370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.537 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:23.470 [2024-12-09T09:19:01.194Z] ====================================== 00:08:23.470 [2024-12-09T09:19:01.194Z] busy:2106009424 (cyc) 00:08:23.470 [2024-12-09T09:19:01.194Z] total_run_count: 417000 00:08:23.470 [2024-12-09T09:19:01.194Z] tsc_hz: 2100000000 (cyc) 00:08:23.470 [2024-12-09T09:19:01.194Z] ====================================== 00:08:23.470 [2024-12-09T09:19:01.194Z] poller_cost: 5050 (cyc), 2404 (nsec) 00:08:23.470 00:08:23.470 real 0m1.195s 00:08:23.470 user 0m1.116s 00:08:23.470 sys 0m0.074s 00:08:23.470 10:19:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.470 10:19:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:23.470 ************************************ 00:08:23.470 END TEST thread_poller_perf 00:08:23.470 ************************************ 00:08:23.470 10:19:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:23.470 10:19:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:23.470 10:19:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.470 10:19:01 thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.728 ************************************ 00:08:23.728 START TEST thread_poller_perf 00:08:23.728 
************************************ 00:08:23.728 10:19:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:23.728 [2024-12-09 10:19:01.223171] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:23.728 [2024-12-09 10:19:01.223243] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479257 ] 00:08:23.728 [2024-12-09 10:19:01.301158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.728 [2024-12-09 10:19:01.342623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.728 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:24.668 [2024-12-09T09:19:02.392Z] ====================================== 00:08:24.668 [2024-12-09T09:19:02.392Z] busy:2101371752 (cyc) 00:08:24.668 [2024-12-09T09:19:02.392Z] total_run_count: 5170000 00:08:24.668 [2024-12-09T09:19:02.392Z] tsc_hz: 2100000000 (cyc) 00:08:24.668 [2024-12-09T09:19:02.392Z] ====================================== 00:08:24.668 [2024-12-09T09:19:02.392Z] poller_cost: 406 (cyc), 193 (nsec) 00:08:24.668 00:08:24.668 real 0m1.179s 00:08:24.668 user 0m1.097s 00:08:24.668 sys 0m0.078s 00:08:24.668 10:19:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.668 10:19:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:24.668 ************************************ 00:08:24.668 END TEST thread_poller_perf 00:08:24.668 ************************************ 00:08:24.925 10:19:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:24.925 00:08:24.925 real 0m2.686s 00:08:24.925 user 0m2.370s 00:08:24.925 sys 0m0.330s 00:08:24.925 10:19:02 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.925 10:19:02 thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.925 ************************************ 00:08:24.925 END TEST thread 00:08:24.925 ************************************ 00:08:24.925 10:19:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:24.926 10:19:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:24.926 10:19:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.926 10:19:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.926 10:19:02 -- common/autotest_common.sh@10 -- # set +x 00:08:24.926 ************************************ 00:08:24.926 START TEST app_cmdline 00:08:24.926 ************************************ 00:08:24.926 10:19:02 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:24.926 * Looking for test storage... 00:08:24.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:24.926 10:19:02 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:24.926 10:19:02 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:24.926 10:19:02 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:24.926 10:19:02 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
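Both the thread and app_cmdline suites gate the lcov coverage options on `lt 1.15 2`, whose trace runs through this chunk: `cmp_versions` splits each version string into an array (`IFS=.-:`, `read -ra`) and walks the fields numerically. A standalone sketch in the same spirit, simplified to dot-separated numeric fields and under a hypothetical name (`version_lt` is not the scripts/common.sh function):

```shell
# Field-by-field numeric version comparison, modeled on the
# scripts/common.sh cmp_versions trace above (simplified).
version_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields count as 0, so "2" compares like "2.0".
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Comparing fields numerically rather than lexicographically is the point of the array split: it makes 1.9 sort before 1.15, which a plain string compare would get wrong.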
00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.926 10:19:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.182 10:19:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:25.182 10:19:02 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.182 10:19:02 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.182 --rc genhtml_branch_coverage=1 
00:08:25.182 --rc genhtml_function_coverage=1 00:08:25.182 --rc genhtml_legend=1 00:08:25.182 --rc geninfo_all_blocks=1 00:08:25.182 --rc geninfo_unexecuted_blocks=1 00:08:25.182 00:08:25.182 ' 00:08:25.182 10:19:02 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.182 --rc genhtml_branch_coverage=1 00:08:25.182 --rc genhtml_function_coverage=1 00:08:25.182 --rc genhtml_legend=1 00:08:25.182 --rc geninfo_all_blocks=1 00:08:25.182 --rc geninfo_unexecuted_blocks=1 00:08:25.182 00:08:25.182 ' 00:08:25.182 10:19:02 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.182 --rc genhtml_branch_coverage=1 00:08:25.182 --rc genhtml_function_coverage=1 00:08:25.182 --rc genhtml_legend=1 00:08:25.182 --rc geninfo_all_blocks=1 00:08:25.182 --rc geninfo_unexecuted_blocks=1 00:08:25.182 00:08:25.182 ' 00:08:25.182 10:19:02 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.182 --rc genhtml_branch_coverage=1 00:08:25.182 --rc genhtml_function_coverage=1 00:08:25.182 --rc genhtml_legend=1 00:08:25.182 --rc geninfo_all_blocks=1 00:08:25.182 --rc geninfo_unexecuted_blocks=1 00:08:25.183 00:08:25.183 ' 00:08:25.183 10:19:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:25.183 10:19:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2479558 00:08:25.183 10:19:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2479558 00:08:25.183 10:19:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:25.183 10:19:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2479558 ']' 00:08:25.183 10:19:02 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:25.183 10:19:02 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.183 10:19:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.183 10:19:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.183 10:19:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.183 [2024-12-09 10:19:02.711466] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:25.183 [2024-12-09 10:19:02.711512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479558 ] 00:08:25.183 [2024-12-09 10:19:02.787708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.183 [2024-12-09 10:19:02.829574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.114 10:19:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.114 10:19:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:26.114 { 00:08:26.114 "version": "SPDK v25.01-pre git sha1 496bfd677", 00:08:26.114 "fields": { 00:08:26.114 "major": 25, 00:08:26.114 "minor": 1, 00:08:26.114 "patch": 0, 00:08:26.114 "suffix": "-pre", 00:08:26.114 "commit": "496bfd677" 00:08:26.114 } 00:08:26.114 } 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:26.114 10:19:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.114 10:19:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.114 10:19:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:26.114 10:19:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.114 10:19:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:26.115 10:19:03 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.373 request: 00:08:26.373 { 00:08:26.373 "method": "env_dpdk_get_mem_stats", 00:08:26.373 "req_id": 1 00:08:26.373 } 00:08:26.373 Got JSON-RPC error response 00:08:26.373 response: 00:08:26.373 { 00:08:26.373 "code": -32601, 00:08:26.373 "message": "Method not found" 00:08:26.373 } 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.373 10:19:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2479558 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2479558 ']' 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2479558 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.373 10:19:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2479558 00:08:26.373 10:19:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.373 10:19:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.373 10:19:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2479558' 00:08:26.373 killing process with pid 2479558 00:08:26.373 
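The exchange above shows the app_cmdline negative path: `spdk_get_version` succeeds, while `env_dpdk_get_mem_stats` is answered with JSON-RPC error `-32601` ("Method not found"), which the harness's NOT wrapper treats as the expected failure (`es=1`). A minimal sketch of that flow, where `fake_rpc` is an invented stand-in (the real test drives `scripts/rpc.py` against a live SPDK target):

```shell
# fake_rpc is a hypothetical stand-in for scripts/rpc.py: it answers
# spdk_get_version and rejects anything else with the same -32601
# "Method not found" JSON-RPC error the target returned above.
fake_rpc() {
  case "$1" in
    spdk_get_version)
      echo '{"version": "SPDK v25.01-pre git sha1 496bfd677"}'
      ;;
    *)
      echo '{"code": -32601, "message": "Method not found"}' >&2
      return 1
      ;;
  esac
}

# The harness expects this call to fail, and checks the error body.
if ! fake_rpc env_dpdk_get_mem_stats 2>/tmp/rpc_err.json; then
  grep -q -- '-32601' /tmp/rpc_err.json && echo 'method not found, as expected'
fi
```

The point of the test is precisely that a nonzero exit plus `-32601` counts as success for the unknown-method case.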
10:19:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 2479558 00:08:26.373 10:19:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 2479558 00:08:26.632 00:08:26.632 real 0m1.823s 00:08:26.632 user 0m2.190s 00:08:26.632 sys 0m0.459s 00:08:26.632 10:19:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.632 10:19:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.632 ************************************ 00:08:26.632 END TEST app_cmdline 00:08:26.632 ************************************ 00:08:26.632 10:19:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:26.632 10:19:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.632 10:19:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.632 10:19:04 -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 ************************************ 00:08:26.891 START TEST version 00:08:26.891 ************************************ 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:26.891 * Looking for test storage... 
00:08:26.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:26.891 10:19:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.891 10:19:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.891 10:19:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.891 10:19:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.891 10:19:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.891 10:19:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.891 10:19:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.891 10:19:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.891 10:19:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.891 10:19:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.891 10:19:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.891 10:19:04 version -- scripts/common.sh@344 -- # case "$op" in 00:08:26.891 10:19:04 version -- scripts/common.sh@345 -- # : 1 00:08:26.891 10:19:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.891 10:19:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.891 10:19:04 version -- scripts/common.sh@365 -- # decimal 1 00:08:26.891 10:19:04 version -- scripts/common.sh@353 -- # local d=1 00:08:26.891 10:19:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.891 10:19:04 version -- scripts/common.sh@355 -- # echo 1 00:08:26.891 10:19:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.891 10:19:04 version -- scripts/common.sh@366 -- # decimal 2 00:08:26.891 10:19:04 version -- scripts/common.sh@353 -- # local d=2 00:08:26.891 10:19:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.891 10:19:04 version -- scripts/common.sh@355 -- # echo 2 00:08:26.891 10:19:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.891 10:19:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.891 10:19:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.891 10:19:04 version -- scripts/common.sh@368 -- # return 0 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:26.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.891 --rc genhtml_branch_coverage=1 00:08:26.891 --rc genhtml_function_coverage=1 00:08:26.891 --rc genhtml_legend=1 00:08:26.891 --rc geninfo_all_blocks=1 00:08:26.891 --rc geninfo_unexecuted_blocks=1 00:08:26.891 00:08:26.891 ' 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:26.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.891 --rc genhtml_branch_coverage=1 00:08:26.891 --rc genhtml_function_coverage=1 00:08:26.891 --rc genhtml_legend=1 00:08:26.891 --rc geninfo_all_blocks=1 00:08:26.891 --rc geninfo_unexecuted_blocks=1 00:08:26.891 00:08:26.891 ' 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:26.891 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.891 --rc genhtml_branch_coverage=1 00:08:26.891 --rc genhtml_function_coverage=1 00:08:26.891 --rc genhtml_legend=1 00:08:26.891 --rc geninfo_all_blocks=1 00:08:26.891 --rc geninfo_unexecuted_blocks=1 00:08:26.891 00:08:26.891 ' 00:08:26.891 10:19:04 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:26.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.891 --rc genhtml_branch_coverage=1 00:08:26.891 --rc genhtml_function_coverage=1 00:08:26.891 --rc genhtml_legend=1 00:08:26.891 --rc geninfo_all_blocks=1 00:08:26.891 --rc geninfo_unexecuted_blocks=1 00:08:26.891 00:08:26.891 ' 00:08:26.891 10:19:04 version -- app/version.sh@17 -- # get_header_version major 00:08:26.892 10:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # cut -f2 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.892 10:19:04 version -- app/version.sh@17 -- # major=25 00:08:26.892 10:19:04 version -- app/version.sh@18 -- # get_header_version minor 00:08:26.892 10:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # cut -f2 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.892 10:19:04 version -- app/version.sh@18 -- # minor=1 00:08:26.892 10:19:04 version -- app/version.sh@19 -- # get_header_version patch 00:08:26.892 10:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # cut -f2 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.892 
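The `get_header_version` calls above pull each field out of `include/spdk/version.h` with a grep/cut/tr pipeline. A self-contained sketch of that extraction, using a fabricated demo header (the field position in `cut` is an assumption for this sample layout, not SPDK's exact invocation):

```shell
# Fabricated stand-in for include/spdk/version.h, for demonstration only.
cat > /tmp/version_demo.h <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

# Sketch of app/version.sh's get_header_version: grep the matching
# #define, take the value field, strip surrounding quotes.
get_header_version() {
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" /tmp/version_demo.h \
    | cut -d' ' -f3 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
suffix=$(get_header_version SUFFIX)
echo "${major}.${minor}${suffix}"   # 25.1-pre
```

This is why the log then asserts `25.1rc0 == 25.1rc0`: the shell-parsed header fields must agree with what `python3 -c 'import spdk; print(spdk.__version__)'` reports.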
10:19:04 version -- app/version.sh@19 -- # patch=0 00:08:26.892 10:19:04 version -- app/version.sh@20 -- # get_header_version suffix 00:08:26.892 10:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # cut -f2 00:08:26.892 10:19:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:26.892 10:19:04 version -- app/version.sh@20 -- # suffix=-pre 00:08:26.892 10:19:04 version -- app/version.sh@22 -- # version=25.1 00:08:26.892 10:19:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:26.892 10:19:04 version -- app/version.sh@28 -- # version=25.1rc0 00:08:26.892 10:19:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:26.892 10:19:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:27.151 10:19:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:27.151 10:19:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:27.151 00:08:27.151 real 0m0.246s 00:08:27.151 user 0m0.157s 00:08:27.151 sys 0m0.132s 00:08:27.151 10:19:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.151 10:19:04 version -- common/autotest_common.sh@10 -- # set +x 00:08:27.151 ************************************ 00:08:27.151 END TEST version 00:08:27.151 ************************************ 00:08:27.151 10:19:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:27.151 10:19:04 -- spdk/autotest.sh@194 -- # uname -s 00:08:27.151 10:19:04 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:27.151 10:19:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:27.151 10:19:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:27.151 10:19:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:27.151 10:19:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.151 10:19:04 -- common/autotest_common.sh@10 -- # set +x 00:08:27.151 10:19:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:27.151 10:19:04 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:27.151 10:19:04 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:27.151 10:19:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.151 10:19:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.151 10:19:04 -- common/autotest_common.sh@10 -- # set +x 00:08:27.151 ************************************ 00:08:27.151 START TEST nvmf_tcp 00:08:27.151 ************************************ 00:08:27.151 10:19:04 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:27.151 * Looking for test storage... 
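The recurring `lt 1.15 2` probe in these lcov checks goes through `scripts/common.sh`'s `cmp_versions`, which splits each version on separators and compares field by field. A condensed sketch of that idea (`cmp_lt` is illustrative and only handles dotted numeric versions; the real helper also splits on `-` and `:` and supports more operators):

```shell
# cmp_lt A B: succeed (status 0) iff version A < version B.
# Split both versions on dots, then compare numerically field by field,
# padding the shorter version with zeros.
cmp_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1  # equal versions are not "less than"
}

cmp_lt 1.15 2 && echo "1.15 < 2"        # lcov 1.15 predates the 2.x options
cmp_lt 25.1 25.0 || echo "25.1 not < 25.0"
```

Note the comparison is numeric per field, not lexicographic, which is why `1.15 < 2` holds even though the string "1.15" sorts after "2" would suggest otherwise.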
00:08:27.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:27.151 10:19:04 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.151 10:19:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.151 10:19:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.410 10:19:04 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.410 --rc genhtml_branch_coverage=1 00:08:27.410 --rc genhtml_function_coverage=1 00:08:27.410 --rc genhtml_legend=1 00:08:27.410 --rc geninfo_all_blocks=1 00:08:27.410 --rc geninfo_unexecuted_blocks=1 00:08:27.410 00:08:27.410 ' 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.410 --rc genhtml_branch_coverage=1 00:08:27.410 --rc genhtml_function_coverage=1 00:08:27.410 --rc genhtml_legend=1 00:08:27.410 --rc geninfo_all_blocks=1 00:08:27.410 --rc geninfo_unexecuted_blocks=1 00:08:27.410 00:08:27.410 ' 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:08:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.410 --rc genhtml_branch_coverage=1 00:08:27.410 --rc genhtml_function_coverage=1 00:08:27.410 --rc genhtml_legend=1 00:08:27.410 --rc geninfo_all_blocks=1 00:08:27.410 --rc geninfo_unexecuted_blocks=1 00:08:27.410 00:08:27.410 ' 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.410 --rc genhtml_branch_coverage=1 00:08:27.410 --rc genhtml_function_coverage=1 00:08:27.410 --rc genhtml_legend=1 00:08:27.410 --rc geninfo_all_blocks=1 00:08:27.410 --rc geninfo_unexecuted_blocks=1 00:08:27.410 00:08:27.410 ' 00:08:27.410 10:19:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:27.410 10:19:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:27.410 10:19:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.410 10:19:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.410 ************************************ 00:08:27.410 START TEST nvmf_target_core 00:08:27.410 ************************************ 00:08:27.410 10:19:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:27.410 * Looking for test storage... 
00:08:27.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.410 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.411 --rc genhtml_branch_coverage=1 00:08:27.411 --rc genhtml_function_coverage=1 00:08:27.411 --rc genhtml_legend=1 00:08:27.411 --rc geninfo_all_blocks=1 00:08:27.411 --rc geninfo_unexecuted_blocks=1 00:08:27.411 00:08:27.411 ' 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.411 --rc genhtml_branch_coverage=1 
00:08:27.411 --rc genhtml_function_coverage=1 00:08:27.411 --rc genhtml_legend=1 00:08:27.411 --rc geninfo_all_blocks=1 00:08:27.411 --rc geninfo_unexecuted_blocks=1 00:08:27.411 00:08:27.411 ' 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.411 --rc genhtml_branch_coverage=1 00:08:27.411 --rc genhtml_function_coverage=1 00:08:27.411 --rc genhtml_legend=1 00:08:27.411 --rc geninfo_all_blocks=1 00:08:27.411 --rc geninfo_unexecuted_blocks=1 00:08:27.411 00:08:27.411 ' 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.411 --rc genhtml_branch_coverage=1 00:08:27.411 --rc genhtml_function_coverage=1 00:08:27.411 --rc genhtml_legend=1 00:08:27.411 --rc geninfo_all_blocks=1 00:08:27.411 --rc geninfo_unexecuted_blocks=1 00:08:27.411 00:08:27.411 ' 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.411 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
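The stray `[: : integer expression expected` above is the test builtin, not the harness, complaining: `nvmf/common.sh` line 33 runs `'[' '' -eq 1 ']'`, and `-eq` demands integers, so an empty variable makes the comparison error out (status 2) instead of evaluating false. A small reproduction (the variable name `flag` is invented; the log elides which variable was empty):

```shell
flag=""   # stands in for whatever empty variable reaches common.sh line 33

# test(1)'s -eq is a numeric comparison: an empty operand is an error,
# so the status is 2 ("integer expression expected"), not 0 or 1.
st=0
[ "$flag" -eq 1 ] 2>/tmp/eq_err.txt || st=$?
echo "status=$st"   # status=2

# A defensive pattern: default the variable before the numeric test.
if [ "${flag:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi
```

Because the script does not run under `set -e` at that point, the status-2 test simply falls through to the else branch and the run continues, which is why the message appears as noise rather than a failure.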
00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.674 ************************************ 00:08:27.674 START TEST nvmf_abort 00:08:27.674 ************************************ 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:27.674 * Looking for test storage... 
00:08:27.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.674 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.675 
10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.675 --rc genhtml_branch_coverage=1 00:08:27.675 --rc genhtml_function_coverage=1 00:08:27.675 --rc genhtml_legend=1 00:08:27.675 --rc geninfo_all_blocks=1 00:08:27.675 --rc 
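The trace above shows `scripts/common.sh`'s `cmp_versions` splitting both version strings on `.-:` and comparing them field by field (here concluding 1.15 < 2, so the newer lcov flag set is selected). A minimal standalone sketch of that field-wise comparison, under the same `.-:` separator convention; the function name `version_lt` is mine, not SPDK's:

```shell
#!/usr/bin/env bash
# Field-wise dotted-version compare, modeled on the cmp_versions
# trace above. Returns 0 (true) when $1 < $2.
version_lt() {
    local -a ver1 ver2
    local v max a b
    IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # split "2"    -> (2)
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields count as 0, so 1.15 vs 2 compares (1,15) vs (2,0)
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the numeric comparison per field: `1.2.3 < 1.10` holds here, whereas a plain string comparison would get it wrong.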
geninfo_unexecuted_blocks=1 00:08:27.675 00:08:27.675 ' 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.675 --rc genhtml_branch_coverage=1 00:08:27.675 --rc genhtml_function_coverage=1 00:08:27.675 --rc genhtml_legend=1 00:08:27.675 --rc geninfo_all_blocks=1 00:08:27.675 --rc geninfo_unexecuted_blocks=1 00:08:27.675 00:08:27.675 ' 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.675 --rc genhtml_branch_coverage=1 00:08:27.675 --rc genhtml_function_coverage=1 00:08:27.675 --rc genhtml_legend=1 00:08:27.675 --rc geninfo_all_blocks=1 00:08:27.675 --rc geninfo_unexecuted_blocks=1 00:08:27.675 00:08:27.675 ' 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.675 --rc genhtml_branch_coverage=1 00:08:27.675 --rc genhtml_function_coverage=1 00:08:27.675 --rc genhtml_legend=1 00:08:27.675 --rc geninfo_all_blocks=1 00:08:27.675 --rc geninfo_unexecuted_blocks=1 00:08:27.675 00:08:27.675 ' 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:27.675 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.676 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.676 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.676 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.677 10:19:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.677 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.678 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.678 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.678 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.940 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.940 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:08:27.940 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.940 10:19:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.503 10:19:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:34.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:34.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.503 10:19:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:34.503 Found net devices under 0000:86:00.0: cvl_0_0 00:08:34.503 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:08:34.504 Found net devices under 0000:86:00.1: cvl_0_1 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:08:34.504 00:08:34.504 --- 10.0.0.2 ping statistics --- 00:08:34.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.504 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:08:34.504 00:08:34.504 --- 10.0.0.1 ping statistics --- 00:08:34.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.504 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort 
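The `nvmf_tcp_init` sequence traced above moves the target-side port into its own network namespace (`cvl_0_0_ns_spdk`), so the SPDK target at 10.0.0.2 and the kernel initiator at 10.0.0.1 get independent IP stacks on one machine, then opens TCP/4420 and pings both directions. A dry-run sketch of that topology; interface and address names are taken from the log, and the `run` wrapper only echoes each command (drop it and run as root to apply for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator split seen in the log.
# "run" only echoes; remove the echo (and run as root) to apply.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # namespace name from the log
TGT_IF=cvl_0_0              # target-side port
INI_IF=cvl_0_1              # initiator-side port

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the initiator side
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions, as the log does
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

This is why the target is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: it must live in the namespace that owns the 10.0.0.2 interface.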
-- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2483311 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2483311 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2483311 ']' 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 [2024-12-09 10:19:11.470318] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:08:34.504 [2024-12-09 10:19:11.470365] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.504 [2024-12-09 10:19:11.549931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.504 [2024-12-09 10:19:11.592702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.504 [2024-12-09 10:19:11.592741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.504 [2024-12-09 10:19:11.592748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.504 [2024-12-09 10:19:11.592754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.504 [2024-12-09 10:19:11.592759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:34.504 [2024-12-09 10:19:11.594186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.504 [2024-12-09 10:19:11.594291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.504 [2024-12-09 10:19:11.594292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 [2024-12-09 10:19:11.739866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 Malloc0 00:08:34.504 10:19:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 Delay0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.505 [2024-12-09 10:19:11.820589] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.505 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:34.505 [2024-12-09 10:19:11.916477] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:36.409 Initializing NVMe Controllers 00:08:36.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:36.409 controller IO queue size 128 less than required 00:08:36.409 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:36.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:36.409 Initialization complete. Launching workers. 
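The `rpc_cmd` lines traced from `target/abort.sh` amount to a short RPC sequence: create the TCP transport, back a namespace with a malloc bdev wrapped in a delay bdev (so aborts have in-flight I/O to catch), expose it as a subsystem, then run the abort example against it. A sketch of the same sequence as `scripts/rpc.py` invocations; arguments are taken from the log, the `rpc.py` path assumes an SPDK checkout, and `run` only echoes since this needs a live `nvmf_tgt`:

```shell
#!/usr/bin/env bash
# RPC sequence behind the rpc_cmd calls in the trace, dry-run via "run";
# remove the echo to issue these against a running nvmf_tgt.
run() { echo "+ $*"; }
RPC="scripts/rpc.py"        # path assumed relative to an SPDK checkout
NQN=nqn.2016-06.io.spdk:cnode0

run $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
run $RPC bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB, 4 KiB blocks
run $RPC bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000       # latencies in microseconds
run $RPC nvmf_create_subsystem "$NQN" -a -s SPDK0
run $RPC nvmf_subsystem_add_ns "$NQN" Delay0
run $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
# the abort example then hammers the deliberately slow namespace:
run build/examples/abort \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420" \
    -c 0x1 -t 1 -l warning -q 128
```

The delay bdev is the point of the test: with every operation slowed, queued commands are still outstanding when aborts arrive, which is what produces the large "abort submitted / success" counts in the results below.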
00:08:36.409 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37196 00:08:36.409 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37257, failed to submit 62 00:08:36.409 success 37200, unsuccessful 57, failed 0 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.409 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.409 rmmod nvme_tcp 00:08:36.409 rmmod nvme_fabrics 00:08:36.409 rmmod nvme_keyring 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:36.668 10:19:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2483311 ']' 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2483311 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2483311 ']' 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2483311 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483311 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2483311' 00:08:36.668 killing process with pid 2483311 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2483311 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2483311 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.668 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.927 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.927 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.927 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.927 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.830 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.830 00:08:38.830 real 0m11.276s 00:08:38.830 user 0m11.857s 00:08:38.830 sys 0m5.417s 00:08:38.830 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.830 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.830 ************************************ 00:08:38.831 END TEST nvmf_abort 00:08:38.831 ************************************ 00:08:38.831 10:19:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:38.831 10:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.831 10:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.831 10:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.831 ************************************ 00:08:38.831 START TEST nvmf_ns_hotplug_stress 00:08:38.831 ************************************ 00:08:38.831 10:19:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:39.090 * Looking for test storage... 00:08:39.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.090 
10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.090 10:19:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.090 --rc genhtml_branch_coverage=1 00:08:39.090 --rc genhtml_function_coverage=1 00:08:39.090 --rc genhtml_legend=1 00:08:39.090 --rc geninfo_all_blocks=1 00:08:39.090 --rc geninfo_unexecuted_blocks=1 00:08:39.090 00:08:39.090 ' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.090 --rc genhtml_branch_coverage=1 00:08:39.090 --rc genhtml_function_coverage=1 00:08:39.090 --rc genhtml_legend=1 00:08:39.090 --rc geninfo_all_blocks=1 00:08:39.090 --rc geninfo_unexecuted_blocks=1 00:08:39.090 00:08:39.090 ' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.090 --rc genhtml_branch_coverage=1 00:08:39.090 --rc genhtml_function_coverage=1 00:08:39.090 --rc genhtml_legend=1 00:08:39.090 --rc geninfo_all_blocks=1 00:08:39.090 --rc geninfo_unexecuted_blocks=1 00:08:39.090 00:08:39.090 ' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.090 --rc genhtml_branch_coverage=1 00:08:39.090 --rc genhtml_function_coverage=1 00:08:39.090 --rc genhtml_legend=1 00:08:39.090 --rc geninfo_all_blocks=1 00:08:39.090 --rc geninfo_unexecuted_blocks=1 00:08:39.090 
00:08:39.090 ' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.090 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.091 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.657 10:19:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:45.657 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:45.657 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.657 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.657 10:19:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:45.658 Found net devices under 0000:86:00.0: cvl_0_0 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.658 10:19:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:45.658 Found net devices under 0000:86:00.1: cvl_0_1 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.658 10:19:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:08:45.658 00:08:45.658 --- 10.0.0.2 ping statistics --- 00:08:45.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.658 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:08:45.658 00:08:45.658 --- 10.0.0.1 ping statistics --- 00:08:45.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.658 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2487381 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2487381 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2487381 ']' 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
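The `nvmf_tcp_init` steps traced above (flush addresses, move the target NIC into its own network namespace, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420) can be condensed into the sketch below. The interface names `cvl_0_0`/`cvl_0_1` are specific to this host's NIC enumeration; the `run` wrapper is an assumption added here so the sequence echoes instead of requiring root.

```shell
# Dry-run sketch of nvmf/common.sh nvmf_tcp_init as traced in this log.
# run() echoes each command rather than executing it (no root needed).
run() { echo "$*"; }

nvmf_tcp_init_sketch() {
  run ip -4 addr flush cvl_0_0
  run ip -4 addr flush cvl_0_1
  run ip netns add cvl_0_0_ns_spdk
  run ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # target NIC into its own netns
  run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side IP
  run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side IP
  run ip link set cvl_0_1 up
  run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  run ip netns exec cvl_0_0_ns_spdk ip link set lo up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
}
nvmf_tcp_init_sketch
```

The cross-namespace pings that follow in the log (10.0.0.2 from the host, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify this setup before the target starts.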
00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.658 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.658 [2024-12-09 10:19:22.839925] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:45.658 [2024-12-09 10:19:22.839970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.658 [2024-12-09 10:19:22.916124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.658 [2024-12-09 10:19:22.955629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.658 [2024-12-09 10:19:22.955666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.658 [2024-12-09 10:19:22.955673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.658 [2024-12-09 10:19:22.955679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.658 [2024-12-09 10:19:22.955684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.658 [2024-12-09 10:19:22.957098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.658 [2024-12-09 10:19:22.957204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.658 [2024-12-09 10:19:22.957206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.658 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.658 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:45.658 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.658 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.658 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.658 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.659 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:45.659 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:45.659 [2024-12-09 10:19:23.267400] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.659 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.916 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.174 [2024-12-09 10:19:23.648794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.174 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.174 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:46.432 Malloc0 00:08:46.432 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:46.691 Delay0 00:08:46.691 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.949 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:46.949 NULL1 00:08:47.206 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:47.206 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:47.206 10:19:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2487747 00:08:47.206 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:47.207 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.578 Read completed with error (sct=0, sc=11) 00:08:48.578 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.578 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:48.578 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:48.836 true 00:08:48.836 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:48.836 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.768 10:19:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.768 10:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:49.768 10:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:50.026 true 00:08:50.026 10:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:50.026 10:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.284 10:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.543 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:50.543 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:50.543 true 00:08:50.543 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:50.543 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.916 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.916 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:51.916 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:52.174 true 00:08:52.174 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:52.175 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.433 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.691 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:52.691 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:52.691 true 00:08:52.691 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:52.691 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.949 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.208 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:53.208 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:53.466 true 00:08:53.466 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:53.466 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.466 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.725 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:53.725 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:53.983 true 00:08:53.983 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:53.983 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.356 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:08:55.356 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.356 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:55.356 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:55.614 true 00:08:55.614 10:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:55.614 10:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.545 10:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.545 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:56.545 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1009 00:08:56.803 true 00:08:56.803 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:56.803 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.803 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.060 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:57.060 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:57.317 true 00:08:57.317 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:57.317 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.573 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.573 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:57.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.829 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:57.829 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:57.829 true 00:08:57.829 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:57.829 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.762 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.019 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:59.019 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:59.019 true 00:08:59.277 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:59.277 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.277 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:08:59.535 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:59.535 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:59.794 true 00:08:59.794 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:08:59.794 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.237 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.237 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:01.237 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:01.237 true 00:09:01.237 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:09:01.237 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.527 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.527 10:19:39 
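The RPC calls traced earlier in this log (transport creation through namespace attach) form the target configuration the stress loop runs against. A condensed sketch, with `rpc` as an assumed echo wrapper standing in for the absolute `scripts/rpc.py` path used in the log:

```shell
# Dry-run sketch of the target setup sequence from ns_hotplug_stress.sh
# lines 27-36 as traced in this log. rpc() echoes instead of invoking rpc.py.
rpc() { echo "rpc.py $*"; }

setup_target() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_malloc_create 32 512 -b Malloc0
  rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc bdev_null_create NULL1 1000 512
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
}
setup_target
```

With this in place, `spdk_nvme_perf` connects over `trtype:tcp traddr:10.0.0.2 trsvcid:4420` and drives I/O while namespaces are hot-removed and re-added underneath it.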
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:01.527 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:01.786 true 00:09:01.786 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:09:01.786 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.164 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.165 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:03.165 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:03.423 true 00:09:03.423 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:09:03.423 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.358 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.358 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:04.358 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:04.617 true 00:09:04.617 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:09:04.617 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.875 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.875 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:04.875 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:05.134 true 00:09:05.134 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:09:05.134 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.068 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.068 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.329 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:06.329 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:06.587 true 00:09:06.587 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747 00:09:06.587 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.519 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.519 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:07.519 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:07.777 true 
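Each iteration visible in the log follows the same cycle from `ns_hotplug_stress.sh` lines 44-50: check the perf workload is still alive (`kill -0 $PERF_PID`), remove namespace 1, re-add `Delay0`, and grow `NULL1` by one block. A minimal sketch, again with an assumed `rpc` echo wrapper in place of `rpc.py`:

```shell
# Dry-run sketch of one hotplug stress iteration as traced in this log.
rpc() { echo "rpc.py $*"; }
null_size=1000

hotplug_iteration() {
  # In the real script, `kill -0 $PERF_PID` first confirms spdk_nvme_perf
  # is still running; the loop exits when the workload finishes.
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  null_size=$((null_size + 1))
  rpc bdev_null_resize NULL1 $null_size
}
hotplug_iteration
hotplug_iteration
```

The "Read completed with error (sct=0, sc=11)" messages throughout the log are the expected initiator-side fallout of this cycle: reads racing the namespace removal complete with an error status, which is the condition the stress test exercises.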
00:09:07.777 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:07.777 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:08.034 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:08.292 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:09:08.292 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:09:08.292 true
00:09:08.292 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:08.292 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:09.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:09.670 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:09.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:09.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:09.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:09.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:09.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:09.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:09.670 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:09:09.670 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:09:09.929 true
00:09:09.929 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:09.929 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:10.865 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:10.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:10.865 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:09:10.865 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:09:11.125 true
00:09:11.125 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:11.125 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:11.385 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:11.385 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:09:11.385 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:09:11.645 true
00:09:11.645 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:11.645 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:13.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:13.020 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:13.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:13.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:13.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:13.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:13.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:13.020 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:09:13.020 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:09:13.277 true
00:09:13.277 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:13.277 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:14.210 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:14.210 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:09:14.210 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:09:14.468 true
00:09:14.468 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:14.468 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:14.725 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:14.984 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:09:14.984 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:09:14.984 true
00:09:14.984 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:14.984 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:16.361 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:16.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:16.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:16.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:16.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:16.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:16.361 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:16.362 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:16.621 true
00:09:16.621 10:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:16.621 10:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:17.565 10:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:17.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:17.565 Initializing NVMe Controllers
00:09:17.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:17.565 Controller IO queue size 128, less than required.
00:09:17.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:17.565 Controller IO queue size 128, less than required.
00:09:17.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:17.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:17.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:17.565 Initialization complete. Launching workers.
00:09:17.565 ========================================================
00:09:17.565 Latency(us)
00:09:17.565 Device Information : IOPS MiB/s Average min max
00:09:17.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2010.08 0.98 41856.73 2175.00 1045513.01
00:09:17.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17195.39 8.40 7443.50 1526.97 371398.12
00:09:17.565 ========================================================
00:09:17.565 Total : 19205.47 9.38 11045.25 1526.97 1045513.01
00:09:17.565
00:09:17.565 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:17.565 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:17.823 true
00:09:17.823 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487747
00:09:17.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2487747) - No such process
00:09:17.823 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2487747
00:09:17.823 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:17.823 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:18.082 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:18.082 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:18.082 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:18.082 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:18.082 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:18.341 null0
00:09:18.341 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:18.341 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:18.341 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:18.600 null1
00:09:18.600 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:18.600 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:18.600 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:18.600 null2
00:09:18.600 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:18.600 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:18.600 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:18.858 null3
00:09:18.858 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:18.858 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:18.858 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:09:19.118 null4
00:09:19.118 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:19.118 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:19.118 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:09:19.377 null5
00:09:19.377 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:19.377 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:19.377 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:09:19.377 null6
00:09:19.377 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:19.377 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:19.377 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:09:19.637 null7
00:09:19.637 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:19.637 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:19.637 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:09:19.637 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.637 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:09:19.637 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2493153 2493155 2493156 2493158 2493160 2493162 2493165 2493168
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:19.638 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:19.898 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:20.157 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:20.416 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:20.416 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:20.416 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:20.416 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:20.416 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:20.417 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:20.417 10:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:20.417 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:20.677 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:20.677 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:20.677 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:20.677 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:20.677 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:20.677 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:20.677 10:19:58
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.677 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.936 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.937 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.195 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.453 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.453 10:19:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.453 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.453 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.453 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.453 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.453 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.453 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.453 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.711 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.712 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.970 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:22.229 10:19:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.229 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.488 10:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 
10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.488 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.746 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:23.066 10:20:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:23.066 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:23.325 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.325 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.325 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.325 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.325 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:23.325 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:23.584 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.843 rmmod nvme_tcp 00:09:23.843 rmmod nvme_fabrics 00:09:23.843 rmmod nvme_keyring 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2487381 ']' 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2487381 00:09:23.843 10:20:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2487381 ']' 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2487381 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2487381 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2487381' 00:09:23.843 killing process with pid 2487381 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2487381 00:09:23.843 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2487381 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-save 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.102 10:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.638 00:09:26.638 real 0m47.224s 00:09:26.638 user 3m12.512s 00:09:26.638 sys 0m15.732s 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.638 ************************************ 00:09:26.638 END TEST nvmf_ns_hotplug_stress 00:09:26.638 ************************************ 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.638 ************************************ 00:09:26.638 START TEST 
nvmf_delete_subsystem 00:09:26.638 ************************************ 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:26.638 * Looking for test storage... 00:09:26.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.638 10:20:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.638 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.638 --rc genhtml_branch_coverage=1 00:09:26.638 --rc genhtml_function_coverage=1 00:09:26.638 --rc genhtml_legend=1 00:09:26.638 --rc geninfo_all_blocks=1 00:09:26.638 --rc geninfo_unexecuted_blocks=1 00:09:26.638 00:09:26.638 ' 00:09:26.638 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.638 --rc genhtml_branch_coverage=1 00:09:26.638 --rc genhtml_function_coverage=1 00:09:26.638 --rc genhtml_legend=1 00:09:26.639 --rc geninfo_all_blocks=1 00:09:26.639 --rc geninfo_unexecuted_blocks=1 00:09:26.639 00:09:26.639 ' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.639 --rc genhtml_branch_coverage=1 00:09:26.639 --rc genhtml_function_coverage=1 00:09:26.639 --rc genhtml_legend=1 00:09:26.639 --rc geninfo_all_blocks=1 00:09:26.639 --rc geninfo_unexecuted_blocks=1 00:09:26.639 00:09:26.639 ' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.639 --rc genhtml_branch_coverage=1 00:09:26.639 --rc genhtml_function_coverage=1 00:09:26.639 --rc genhtml_legend=1 00:09:26.639 --rc geninfo_all_blocks=1 
00:09:26.639 --rc geninfo_unexecuted_blocks=1 00:09:26.639 00:09:26.639 ' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.639 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.233 10:20:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:33.233 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:33.233 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.233 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:33.234 Found net devices under 0000:86:00.0: cvl_0_0 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:09:33.234 Found net devices under 0000:86:00.1: cvl_0_1 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.234 10:20:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:09:33.234 00:09:33.234 --- 10.0.0.2 ping statistics --- 00:09:33.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.234 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:09:33.234 00:09:33.234 --- 10.0.0.1 ping statistics --- 00:09:33.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.234 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:33.234 10:20:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2497680 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2497680 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2497680 ']' 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.234 [2024-12-09 10:20:10.113877] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
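The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness's waitforlisten helper, which polls for the RPC socket with `max_retries=100` as the trace shows. A minimal sketch of that polling pattern (`wait_for_path` and its defaults are illustrative assumptions, not SPDK's actual implementation, which also checks that the target PID is still alive):

```shell
# Poll until a path exists, in the spirit of waitforlisten.
# wait_for_path is an illustrative name, not SPDK code.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
        sleep 0.1
    done
    return 0
}
```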
00:09:33.234 [2024-12-09 10:20:10.113930] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.234 [2024-12-09 10:20:10.194384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.234 [2024-12-09 10:20:10.235105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.234 [2024-12-09 10:20:10.235141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.234 [2024-12-09 10:20:10.235151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.234 [2024-12-09 10:20:10.235158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.234 [2024-12-09 10:20:10.235163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
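Earlier in this run, nvmf/common.sh line 33 logged `[: : integer expression expected` because the traced test `'[' '' -eq 1 ']'` hands an empty string to the arithmetic `-eq` operator. A hedged sketch of the failure mode and the usual guard (`flag_enabled` and the variable name are illustrative, not the harness's code):

```shell
# '[ "" -eq 1 ]' is a bash error: -eq requires integer operands.
# Expanding with a default via ${var:-0} keeps the test well-formed
# when the variable is unset or empty.
flag_enabled() {
    local flag=$1
    if [ "${flag:-0}" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}
```

Because the harness runs without `set -e` at that point, the malformed test merely evaluates false and the run continues, which is why the error appears once and the trace moves on.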
00:09:33.234 [2024-12-09 10:20:10.236311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.234 [2024-12-09 10:20:10.236313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.234 [2024-12-09 10:20:10.380684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.234 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.235 [2024-12-09 10:20:10.400899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.235 NULL1 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.235 Delay0 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.235 10:20:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2497770 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:33.235 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:33.235 [2024-12-09 10:20:10.511856] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
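Condensed, the target setup that delete_subsystem.sh performs above before launching spdk_nvme_perf is a short RPC sequence. The dry-run wrapper below only echoes the calls so the sketch runs anywhere; in the real harness these commands go through the SPDK RPC socket (the `rpc` stub is an assumption for illustration):

```shell
# Dry-run stub: echo instead of invoking the real RPC client.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

Each command line matches one `rpc_cmd` invocation in the trace: a null bdev wrapped in a delay bdev gives the subsystem a namespace whose I/O stays in flight long enough for the later nvmf_delete_subsystem to race against it.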
00:09:35.142 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.142 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.142 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with 
error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 starting I/O failed: -6 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 [2024-12-09 10:20:12.678059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8184a0 is same with the state(6) to be set 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write 
completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Write completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.142 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error 
(sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 
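The flood of "Read/Write completed with error (sct=0, sc=8)" lines here is the expected outcome of the test: perf I/O is still in flight when nvmf_delete_subsystem tears the subsystem down, so outstanding commands complete with generic status 0x08, which I read as NVMe's "Command Aborted due to SQ Deletion" (the log itself prints only the raw values). A small awk tally (a hypothetical helper, one event per input line) makes such floods easier to eyeball:

```shell
# Count aborted completions by direction from a captured log fragment.
# Uninitialized awk variables coerce to 0 in %d, so empty input is safe.
count_completions() {
    awk '/Read completed with error/  { r++ }
         /Write completed with error/ { w++ }
         END { printf "reads=%d writes=%d\n", r, w }'
}
```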
00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 starting I/O failed: -6 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 [2024-12-09 10:20:12.680289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f00f0000c40 is same with the state(6) to be set 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read 
completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:35.143 Read completed with error (sct=0, sc=8) 00:09:35.143 Write completed with error (sct=0, sc=8) 00:09:36.081 [2024-12-09 10:20:13.648628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x8199b0 is same with the state(6) to be set 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 [2024-12-09 10:20:13.681835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8182c0 is same with the state(6) to be set 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 
00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 [2024-12-09 10:20:13.682014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x818680 is same with the state(6) to be set 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read 
completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 [2024-12-09 10:20:13.682996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f00f000d020 is same with the state(6) to be set 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Write completed 
with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 Read completed with error (sct=0, sc=8) 00:09:36.081 [2024-12-09 10:20:13.683518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f00f000d800 is same with the state(6) to be set 00:09:36.081 Initializing NVMe Controllers 00:09:36.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:36.081 Controller IO queue size 128, less than required. 00:09:36.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:36.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:36.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:36.082 Initialization complete. Launching workers. 00:09:36.082 ======================================================== 00:09:36.082 Latency(us) 00:09:36.082 Device Information : IOPS MiB/s Average min max 00:09:36.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.12 0.08 897795.38 331.25 1007743.82 00:09:36.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.65 0.08 964283.55 244.91 2002510.13 00:09:36.082 ======================================================== 00:09:36.082 Total : 330.77 0.16 930289.60 244.91 2002510.13 00:09:36.082 00:09:36.082 [2024-12-09 10:20:13.684063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8199b0 (9): Bad file descriptor 00:09:36.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:36.082 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.082 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:36.082 10:20:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2497770 00:09:36.082 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2497770 00:09:36.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2497770) - No such process 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2497770 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2497770 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2497770 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 [2024-12-09 10:20:14.209766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@54 -- # perf_pid=2498399 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:36.650 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:36.650 [2024-12-09 10:20:14.293686] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:37.218 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:37.218 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:37.218 10:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:37.785 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:37.785 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:37.785 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.047 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.048 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:38.048 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.614 10:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.614 10:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:38.614 10:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.181 10:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.181 10:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:39.181 10:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.748 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.748 10:20:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:39.748 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.748 Initializing NVMe Controllers 00:09:39.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:39.748 Controller IO queue size 128, less than required. 00:09:39.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:39.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:39.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:39.749 Initialization complete. Launching workers. 00:09:39.749 ======================================================== 00:09:39.749 Latency(us) 00:09:39.749 Device Information : IOPS MiB/s Average min max 00:09:39.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003433.30 1000152.15 1042549.15 00:09:39.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005179.57 1000213.46 1042655.10 00:09:39.749 ======================================================== 00:09:39.749 Total : 256.00 0.12 1004306.43 1000152.15 1042655.10 00:09:39.749 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2498399 00:09:40.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2498399) - No such process 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2498399 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.317 rmmod nvme_tcp 00:09:40.317 rmmod nvme_fabrics 00:09:40.317 rmmod nvme_keyring 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2497680 ']' 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2497680 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2497680 ']' 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2497680 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.317 10:20:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497680 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497680' 00:09:40.317 killing process with pid 2497680 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2497680 00:09:40.317 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2497680 00:09:40.317 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.317 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.317 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.317 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:40.317 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:40.317 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.576 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.576 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.576 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.576 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:40.576 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.576 10:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.481 00:09:42.481 real 0m16.278s 00:09:42.481 user 0m29.376s 00:09:42.481 sys 0m5.545s 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.481 ************************************ 00:09:42.481 END TEST nvmf_delete_subsystem 00:09:42.481 ************************************ 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.481 ************************************ 00:09:42.481 START TEST nvmf_host_management 00:09:42.481 ************************************ 00:09:42.481 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:42.740 * Looking for test storage... 
00:09:42.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:42.740 10:20:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.740 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.741 10:20:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.741 --rc genhtml_branch_coverage=1 00:09:42.741 --rc genhtml_function_coverage=1 00:09:42.741 --rc genhtml_legend=1 00:09:42.741 --rc geninfo_all_blocks=1 00:09:42.741 --rc geninfo_unexecuted_blocks=1 00:09:42.741 00:09:42.741 ' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.741 --rc genhtml_branch_coverage=1 00:09:42.741 --rc genhtml_function_coverage=1 00:09:42.741 --rc genhtml_legend=1 00:09:42.741 --rc geninfo_all_blocks=1 00:09:42.741 --rc geninfo_unexecuted_blocks=1 00:09:42.741 00:09:42.741 ' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.741 --rc genhtml_branch_coverage=1 00:09:42.741 --rc genhtml_function_coverage=1 00:09:42.741 --rc genhtml_legend=1 00:09:42.741 --rc geninfo_all_blocks=1 00:09:42.741 --rc geninfo_unexecuted_blocks=1 00:09:42.741 00:09:42.741 ' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.741 --rc genhtml_branch_coverage=1 00:09:42.741 --rc genhtml_function_coverage=1 00:09:42.741 --rc genhtml_legend=1 00:09:42.741 --rc geninfo_all_blocks=1 00:09:42.741 --rc geninfo_unexecuted_blocks=1 00:09:42.741 00:09:42.741 ' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.741 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.311 10:20:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.311 10:20:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:49.311 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:49.311 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.311 10:20:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:49.311 Found net devices under 0000:86:00.0: cvl_0_0 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:49.311 Found net devices under 0000:86:00.1: cvl_0_1 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.311 10:20:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:09:49.311 00:09:49.311 --- 10.0.0.2 ping statistics --- 00:09:49.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.311 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:09:49.311 00:09:49.311 --- 10.0.0.1 ping statistics --- 00:09:49.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.311 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:49.311 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2502477 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2502477 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2502477 ']' 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 [2024-12-09 10:20:26.437004] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:49.312 [2024-12-09 10:20:26.437046] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.312 [2024-12-09 10:20:26.515328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.312 [2024-12-09 10:20:26.557664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.312 [2024-12-09 10:20:26.557701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.312 [2024-12-09 10:20:26.557708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.312 [2024-12-09 10:20:26.557714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.312 [2024-12-09 10:20:26.557719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.312 [2024-12-09 10:20:26.559274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.312 [2024-12-09 10:20:26.559382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.312 [2024-12-09 10:20:26.559488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.312 [2024-12-09 10:20:26.559488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 [2024-12-09 10:20:26.697798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:49.312 10:20:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 Malloc0 00:09:49.312 [2024-12-09 10:20:26.774911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2502639 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2502639 /var/tmp/bdevperf.sock 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2502639 ']' 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:49.312 { 00:09:49.312 "params": { 00:09:49.312 "name": "Nvme$subsystem", 00:09:49.312 "trtype": "$TEST_TRANSPORT", 00:09:49.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.312 "adrfam": "ipv4", 00:09:49.312 "trsvcid": "$NVMF_PORT", 00:09:49.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.312 "hdgst": ${hdgst:-false}, 
00:09:49.312 "ddgst": ${ddgst:-false} 00:09:49.312 }, 00:09:49.312 "method": "bdev_nvme_attach_controller" 00:09:49.312 } 00:09:49.312 EOF 00:09:49.312 )") 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:49.312 10:20:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:49.312 "params": { 00:09:49.312 "name": "Nvme0", 00:09:49.312 "trtype": "tcp", 00:09:49.312 "traddr": "10.0.0.2", 00:09:49.312 "adrfam": "ipv4", 00:09:49.312 "trsvcid": "4420", 00:09:49.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:49.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:49.312 "hdgst": false, 00:09:49.312 "ddgst": false 00:09:49.312 }, 00:09:49.312 "method": "bdev_nvme_attach_controller" 00:09:49.312 }' 00:09:49.312 [2024-12-09 10:20:26.872133] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:49.312 [2024-12-09 10:20:26.872181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502639 ] 00:09:49.312 [2024-12-09 10:20:26.951877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.312 [2024-12-09 10:20:26.992612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.572 Running I/O for 10 seconds... 
00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=82 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 82 -ge 100 ']' 00:09:49.572 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:49.831 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:49.831 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:49.831 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:49.831 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:49.831 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.831 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.091 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:09:50.091 [2024-12-09 10:20:27.589787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:50.091 [2024-12-09 10:20:27.589829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:50.092 [... 63 further analogous NOTICE command/completion pairs elided: WRITE sqid:1 cid:24-63 nsid:1 lba:101376-106368 and READ sqid:1 cid:0-22 nsid:1 lba:98304-101120, len:128 each, every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:09:50.093 [2024-12-09 10:20:27.591725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:09:50.093 task offset: 101248 on job bdev=Nvme0n1 fails
00:09:50.093
00:09:50.093 Latency(us)
00:09:50.093 [2024-12-09T09:20:27.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:50.093 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:50.093
Job: Nvme0n1 ended in about 0.40 seconds with error 00:09:50.093 Verification LBA range: start 0x0 length 0x400 00:09:50.093 Nvme0n1 : 0.40 1937.58 121.10 161.47 0.00 29661.52 1451.15 27213.04 00:09:50.093 [2024-12-09T09:20:27.817Z] =================================================================================================================== 00:09:50.093 [2024-12-09T09:20:27.817Z] Total : 1937.58 121.10 161.47 0.00 29661.52 1451.15 27213.04 00:09:50.093 [2024-12-09 10:20:27.594090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:50.093 [2024-12-09 10:20:27.594113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd07120 (9): Bad file descriptor 00:09:50.093 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.093 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:50.093 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.093 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.093 [2024-12-09 10:20:27.597402] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:50.093 [2024-12-09 10:20:27.597486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:50.093 [2024-12-09 10:20:27.597508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:50.093 [2024-12-09 10:20:27.597523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 
00:09:50.094 [2024-12-09 10:20:27.597532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:50.094 [2024-12-09 10:20:27.597539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:09:50.094 [2024-12-09 10:20:27.597545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd07120 00:09:50.094 [2024-12-09 10:20:27.597564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd07120 (9): Bad file descriptor 00:09:50.094 [2024-12-09 10:20:27.597575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:09:50.094 [2024-12-09 10:20:27.597582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:09:50.094 [2024-12-09 10:20:27.597590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:09:50.094 [2024-12-09 10:20:27.597598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
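The trace above shows `waitforio` from `target/host_management.sh` at work: it repeatedly calls `bdev_get_iostat` over the bdevperf RPC socket, pulls `num_read_ops` out with `jq -r '.bdevs[0].num_read_ops'` (82 on the first pass, 707 on the second), and succeeds once the count reaches 100, sleeping 0.25 s between up to 10 attempts. A minimal sketch of that loop, assuming `rpc_cmd` is a wrapper around SPDK's `scripts/rpc.py` as in the log; the threshold and retry counts come from the trace, the rest is an assumption rather than the script's exact body:

```shell
# Poll a bdev's read counter over an RPC socket until enough I/O has
# been observed, mirroring the i=10 countdown / '[' N -ge 100 ']'
# checks visible in the log above.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local i reads ret=1
    for ((i = 10; i != 0; i--)); do
        reads=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            ret=0 # bdevperf has completed enough reads
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

A caller would use it exactly as the trap/kill logic in the log does: `waitforio /var/tmp/bdevperf.sock Nvme0n1 || fail`, so that the subsequent `nvmf_subsystem_remove_host` only fires once I/O is demonstrably flowing.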
00:09:50.094 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.094 10:20:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2502639 00:09:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2502639) - No such process 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:51.031 { 00:09:51.031 "params": { 00:09:51.031 "name": "Nvme$subsystem", 00:09:51.031 "trtype": "$TEST_TRANSPORT", 00:09:51.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.031 "adrfam": "ipv4", 00:09:51.031 "trsvcid": "$NVMF_PORT", 00:09:51.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.031 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:51.031 "hdgst": ${hdgst:-false}, 00:09:51.031 "ddgst": ${ddgst:-false} 00:09:51.031 }, 00:09:51.031 "method": "bdev_nvme_attach_controller" 00:09:51.031 } 00:09:51.031 EOF 00:09:51.031 )") 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:51.031 10:20:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:51.031 "params": { 00:09:51.031 "name": "Nvme0", 00:09:51.031 "trtype": "tcp", 00:09:51.031 "traddr": "10.0.0.2", 00:09:51.031 "adrfam": "ipv4", 00:09:51.031 "trsvcid": "4420", 00:09:51.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:51.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:51.031 "hdgst": false, 00:09:51.031 "ddgst": false 00:09:51.031 }, 00:09:51.031 "method": "bdev_nvme_attach_controller" 00:09:51.031 }' 00:09:51.031 [2024-12-09 10:20:28.661267] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:51.031 [2024-12-09 10:20:28.661313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502992 ] 00:09:51.031 [2024-12-09 10:20:28.737861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.305 [2024-12-09 10:20:28.777336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.564 Running I/O for 1 seconds... 
00:09:52.500 1984.00 IOPS, 124.00 MiB/s 00:09:52.500 Latency(us) 00:09:52.500 [2024-12-09T09:20:30.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.500 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:52.500 Verification LBA range: start 0x0 length 0x400 00:09:52.500 Nvme0n1 : 1.01 2022.43 126.40 0.00 0.00 31157.03 4774.77 26838.55 00:09:52.500 [2024-12-09T09:20:30.224Z] =================================================================================================================== 00:09:52.500 [2024-12-09T09:20:30.224Z] Total : 2022.43 126.40 0.00 0.00 31157.03 4774.77 26838.55 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.758 10:20:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.758 rmmod nvme_tcp 00:09:52.758 rmmod nvme_fabrics 00:09:52.758 rmmod nvme_keyring 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2502477 ']' 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2502477 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2502477 ']' 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2502477 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2502477 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2502477' 00:09:52.758 killing process with pid 2502477 00:09:52.758 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2502477 00:09:52.758 10:20:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2502477 00:09:53.018 [2024-12-09 10:20:30.518375] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.018 10:20:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:54.934 00:09:54.934 real 0m12.432s 00:09:54.934 user 0m19.878s 
00:09:54.934 sys 0m5.540s 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:54.934 ************************************ 00:09:54.934 END TEST nvmf_host_management 00:09:54.934 ************************************ 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.934 10:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.213 ************************************ 00:09:55.213 START TEST nvmf_lvol 00:09:55.213 ************************************ 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:55.213 * Looking for test storage... 
00:09:55.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.213 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.214 10:20:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.214 --rc genhtml_branch_coverage=1 00:09:55.214 --rc genhtml_function_coverage=1 00:09:55.214 --rc genhtml_legend=1 00:09:55.214 --rc geninfo_all_blocks=1 00:09:55.214 --rc geninfo_unexecuted_blocks=1 
00:09:55.214 00:09:55.214 ' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.214 --rc genhtml_branch_coverage=1 00:09:55.214 --rc genhtml_function_coverage=1 00:09:55.214 --rc genhtml_legend=1 00:09:55.214 --rc geninfo_all_blocks=1 00:09:55.214 --rc geninfo_unexecuted_blocks=1 00:09:55.214 00:09:55.214 ' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.214 --rc genhtml_branch_coverage=1 00:09:55.214 --rc genhtml_function_coverage=1 00:09:55.214 --rc genhtml_legend=1 00:09:55.214 --rc geninfo_all_blocks=1 00:09:55.214 --rc geninfo_unexecuted_blocks=1 00:09:55.214 00:09:55.214 ' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.214 --rc genhtml_branch_coverage=1 00:09:55.214 --rc genhtml_function_coverage=1 00:09:55.214 --rc genhtml_legend=1 00:09:55.214 --rc geninfo_all_blocks=1 00:09:55.214 --rc geninfo_unexecuted_blocks=1 00:09:55.214 00:09:55.214 ' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.214 10:20:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.214 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.215 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.215 10:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:01.829 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:01.829 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.829 
10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:01.829 Found net devices under 0000:86:00.0: cvl_0_0 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.829 10:20:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:01.829 Found net devices under 0000:86:00.1: cvl_0_1 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:01.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:10:01.829 00:10:01.829 --- 10.0.0.2 ping statistics --- 00:10:01.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.829 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:01.829 00:10:01.829 --- 10.0.0.1 ping statistics --- 00:10:01.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.829 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.829 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2506795 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2506795 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2506795 ']' 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.830 10:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.830 [2024-12-09 10:20:38.961669] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:01.830 [2024-12-09 10:20:38.961708] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.830 [2024-12-09 10:20:39.040010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:01.830 [2024-12-09 10:20:39.081756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.830 [2024-12-09 10:20:39.081790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.830 [2024-12-09 10:20:39.081798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.830 [2024-12-09 10:20:39.081804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.830 [2024-12-09 10:20:39.081814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:01.830 [2024-12-09 10:20:39.083100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.830 [2024-12-09 10:20:39.083205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.830 [2024-12-09 10:20:39.083206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.087 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.087 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:02.087 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.087 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.087 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:02.344 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.344 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.344 [2024-12-09 10:20:39.987484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.344 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.602 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:02.602 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.860 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:02.860 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:03.119 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:03.377 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c6464072-1f0f-460e-9982-d4862f17621a 00:10:03.377 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c6464072-1f0f-460e-9982-d4862f17621a lvol 20 00:10:03.377 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3861a05d-e77b-4b18-afaf-7c1102b1d969 00:10:03.377 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:03.634 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3861a05d-e77b-4b18-afaf-7c1102b1d969 00:10:03.892 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:04.150 [2024-12-09 10:20:41.619889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.150 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.150 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2507297 00:10:04.150 10:20:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:04.150 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:05.526 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3861a05d-e77b-4b18-afaf-7c1102b1d969 MY_SNAPSHOT 00:10:05.526 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=85a90d07-c728-4081-97ff-b18cfa5d30da 00:10:05.526 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3861a05d-e77b-4b18-afaf-7c1102b1d969 30 00:10:05.785 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 85a90d07-c728-4081-97ff-b18cfa5d30da MY_CLONE 00:10:06.044 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d0cb5a4f-b04b-4cb1-acf5-acab4b9d4cc8 00:10:06.044 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d0cb5a4f-b04b-4cb1-acf5-acab4b9d4cc8 00:10:06.612 10:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2507297 00:10:14.721 Initializing NVMe Controllers 00:10:14.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:14.721 Controller IO queue size 128, less than required. 00:10:14.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:14.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:14.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:14.721 Initialization complete. Launching workers. 00:10:14.721 ======================================================== 00:10:14.721 Latency(us) 00:10:14.721 Device Information : IOPS MiB/s Average min max 00:10:14.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12245.60 47.83 10452.93 1603.46 53345.66 00:10:14.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12335.80 48.19 10380.78 3275.89 71445.79 00:10:14.721 ======================================================== 00:10:14.721 Total : 24581.40 96.02 10416.72 1603.46 71445.79 00:10:14.721 00:10:14.721 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:14.980 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3861a05d-e77b-4b18-afaf-7c1102b1d969 00:10:14.980 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6464072-1f0f-460e-9982-d4862f17621a 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.239 rmmod nvme_tcp 00:10:15.239 rmmod nvme_fabrics 00:10:15.239 rmmod nvme_keyring 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2506795 ']' 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2506795 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2506795 ']' 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2506795 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.239 10:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506795 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506795' 00:10:15.498 killing process with pid 2506795 00:10:15.498 10:20:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2506795 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2506795 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.498 10:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.035 00:10:18.035 real 0m22.598s 00:10:18.035 user 1m4.986s 00:10:18.035 sys 0m7.838s 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 ************************************ 00:10:18.035 END TEST 
nvmf_lvol 00:10:18.035 ************************************ 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.035 ************************************ 00:10:18.035 START TEST nvmf_lvs_grow 00:10:18.035 ************************************ 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:18.035 * Looking for test storage... 00:10:18.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.035 10:20:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.035 --rc genhtml_branch_coverage=1 00:10:18.035 --rc genhtml_function_coverage=1 00:10:18.035 --rc genhtml_legend=1 00:10:18.035 --rc geninfo_all_blocks=1 00:10:18.035 --rc geninfo_unexecuted_blocks=1 00:10:18.035 00:10:18.035 ' 
00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.035 --rc genhtml_branch_coverage=1 00:10:18.035 --rc genhtml_function_coverage=1 00:10:18.035 --rc genhtml_legend=1 00:10:18.035 --rc geninfo_all_blocks=1 00:10:18.035 --rc geninfo_unexecuted_blocks=1 00:10:18.035 00:10:18.035 ' 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.035 --rc genhtml_branch_coverage=1 00:10:18.035 --rc genhtml_function_coverage=1 00:10:18.035 --rc genhtml_legend=1 00:10:18.035 --rc geninfo_all_blocks=1 00:10:18.035 --rc geninfo_unexecuted_blocks=1 00:10:18.035 00:10:18.035 ' 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.035 --rc genhtml_branch_coverage=1 00:10:18.035 --rc genhtml_function_coverage=1 00:10:18.035 --rc genhtml_legend=1 00:10:18.035 --rc geninfo_all_blocks=1 00:10:18.035 --rc geninfo_unexecuted_blocks=1 00:10:18.035 00:10:18.035 ' 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.035 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.036 10:20:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.036 
10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.036 10:20:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.036 
10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.036 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:24.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:24.601 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.601 
10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:24.601 Found net devices under 0000:86:00.0: cvl_0_0 00:10:24.601 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:24.602 Found net devices under 0000:86:00.1: cvl_0_1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:24.602 10:21:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:24.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:10:24.602 00:10:24.602 --- 10.0.0.2 ping statistics --- 00:10:24.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.602 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:10:24.602 00:10:24.602 --- 10.0.0.1 ping statistics --- 00:10:24.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.602 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2512858 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2512858 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2512858 ']' 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.602 [2024-12-09 10:21:01.665315] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
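The gather_supported_nvmf_pci_devs step traced above walks a fixed table of Intel (0x8086) and Mellanox (0x15b3) vendor:device IDs and buckets matching NICs into e810, x722, and mlx lists; on this node it found two E810 ports (0x159b, driver ice) at 0000:86:00.0/1. A standalone sketch of that classification, with the ID table copied from the nvmf/common.sh@325-344 lines in this log (the function name is my own, not an SPDK helper):

```shell
#!/usr/bin/env bash
# Bucket a PCI vendor:device pair the way nvmf/common.sh buckets NICs.
# ID table copied from the common.sh lines traced in this log;
# classify_nvmf_nic is a hypothetical name for illustration only.
classify_nvmf_nic() {
    local intel=0x8086 mellanox=0x15b3
    local vendor=$1 device=$2
    case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") echo e810 ;;
        "$intel:0x37d2")                 echo x722 ;;
        "$mellanox:0xa2dc"|"$mellanox:0x1021"|"$mellanox:0xa2d6"|\
        "$mellanox:0x101d"|"$mellanox:0x101b"|"$mellanox:0x1017"|\
        "$mellanox:0x1019"|"$mellanox:0x1015"|"$mellanox:0x1013") echo mlx ;;
        *)                               echo unsupported ;;
    esac
}

# The two ports found in this run: 0000:86:00.0/1 (0x8086 - 0x159b)
classify_nvmf_nic 0x8086 0x159b   # e810
classify_nvmf_nic 0x15b3 0x1017   # mlx
```

The real script additionally resolves each matched PCI address to its net device via /sys/bus/pci/devices/$pci/net/, which is how cvl_0_0 and cvl_0_1 are found here.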
00:10:24.602 [2024-12-09 10:21:01.665361] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.602 [2024-12-09 10:21:01.742709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.602 [2024-12-09 10:21:01.783495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.602 [2024-12-09 10:21:01.783531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.602 [2024-12-09 10:21:01.783538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.602 [2024-12-09 10:21:01.783545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.602 [2024-12-09 10:21:01.783551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
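Before the target app started, nvmf_tcp_init split the two ports into a target side (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and an initiator side (cvl_0_1, 10.0.0.1), opened TCP port 4420 in iptables, and ping-checked both directions. A dry-run sketch of that sequence, with the commands taken from the nvmf/common.sh@267-291 lines above (the RUN wrapper only prints, since the real commands need root and the physical NICs):

```shell
#!/usr/bin/env bash
# Dry-run of the nvmf_tcp_init namespace plumbing seen in this log.
# RUN only echoes each command; replace its body with "$@" (as root,
# on a machine with these NICs) to actually apply the sequence.
RUN() { printf '%s\n' "$*"; }

tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
tgt_ip=10.0.0.2 ini_ip=10.0.0.1

RUN ip -4 addr flush "$tgt_if"
RUN ip -4 addr flush "$ini_if"
RUN ip netns add "$ns"
RUN ip link set "$tgt_if" netns "$ns"              # target NIC lives in the namespace
RUN ip addr add "$ini_ip/24" dev "$ini_if"
RUN ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
RUN ip link set "$ini_if" up
RUN ip netns exec "$ns" ip link set "$tgt_if" up
RUN ip netns exec "$ns" ip link set lo up
RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
RUN ping -c 1 "$tgt_ip"                            # host -> namespace
RUN ip netns exec "$ns" ping -c 1 "$ini_ip"        # namespace -> host
```

This is why nvmf_tgt is launched under `ip netns exec cvl_0_0_ns_spdk` in the next step: the target must listen from inside the namespace that owns cvl_0_0.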
00:10:24.602 [2024-12-09 10:21:01.784138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.602 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:24.602 [2024-12-09 10:21:02.097599] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.602 ************************************ 00:10:24.602 START TEST lvs_grow_clean 00:10:24.602 ************************************ 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:24.602 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:24.861 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:24.861 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:25.119 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:25.119 10:21:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:25.119 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:25.119 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:25.119 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:25.119 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1af4768-1d39-4808-87e7-dcf58cf668ec lvol 150 00:10:25.377 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9ea73f77-ef72-45df-ae57-0528c0194722 00:10:25.377 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:25.377 10:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:25.635 [2024-12-09 10:21:03.140718] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:25.635 [2024-12-09 10:21:03.140771] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:25.635 true 00:10:25.635 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:25.635 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:25.636 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:25.636 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:25.894 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9ea73f77-ef72-45df-ae57-0528c0194722 00:10:26.153 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:26.412 [2024-12-09 10:21:03.878980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.412 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2513304 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2513304 /var/tmp/bdevperf.sock 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2513304 ']' 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:26.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.412 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:26.412 [2024-12-09 10:21:04.113572] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:26.412 [2024-12-09 10:21:04.113621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513304 ] 00:10:26.670 [2024-12-09 10:21:04.188141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.670 [2024-12-09 10:21:04.228930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.670 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.670 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:26.670 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:26.928 Nvme0n1 00:10:26.928 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:27.186 [ 00:10:27.186 { 00:10:27.186 "name": "Nvme0n1", 00:10:27.186 "aliases": [ 00:10:27.186 "9ea73f77-ef72-45df-ae57-0528c0194722" 00:10:27.186 ], 00:10:27.186 "product_name": "NVMe disk", 00:10:27.186 "block_size": 4096, 00:10:27.186 "num_blocks": 38912, 00:10:27.186 "uuid": "9ea73f77-ef72-45df-ae57-0528c0194722", 00:10:27.186 "numa_id": 1, 00:10:27.186 "assigned_rate_limits": { 00:10:27.186 "rw_ios_per_sec": 0, 00:10:27.186 "rw_mbytes_per_sec": 0, 00:10:27.186 "r_mbytes_per_sec": 0, 00:10:27.186 "w_mbytes_per_sec": 0 00:10:27.186 }, 00:10:27.186 "claimed": false, 00:10:27.186 "zoned": false, 00:10:27.186 "supported_io_types": { 00:10:27.186 "read": true, 
00:10:27.186 "write": true, 00:10:27.186 "unmap": true, 00:10:27.186 "flush": true, 00:10:27.186 "reset": true, 00:10:27.186 "nvme_admin": true, 00:10:27.186 "nvme_io": true, 00:10:27.186 "nvme_io_md": false, 00:10:27.186 "write_zeroes": true, 00:10:27.186 "zcopy": false, 00:10:27.186 "get_zone_info": false, 00:10:27.186 "zone_management": false, 00:10:27.186 "zone_append": false, 00:10:27.186 "compare": true, 00:10:27.186 "compare_and_write": true, 00:10:27.186 "abort": true, 00:10:27.186 "seek_hole": false, 00:10:27.186 "seek_data": false, 00:10:27.186 "copy": true, 00:10:27.186 "nvme_iov_md": false 00:10:27.186 }, 00:10:27.186 "memory_domains": [ 00:10:27.186 { 00:10:27.186 "dma_device_id": "system", 00:10:27.186 "dma_device_type": 1 00:10:27.186 } 00:10:27.186 ], 00:10:27.186 "driver_specific": { 00:10:27.186 "nvme": [ 00:10:27.186 { 00:10:27.186 "trid": { 00:10:27.186 "trtype": "TCP", 00:10:27.186 "adrfam": "IPv4", 00:10:27.186 "traddr": "10.0.0.2", 00:10:27.186 "trsvcid": "4420", 00:10:27.186 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:27.186 }, 00:10:27.186 "ctrlr_data": { 00:10:27.186 "cntlid": 1, 00:10:27.186 "vendor_id": "0x8086", 00:10:27.186 "model_number": "SPDK bdev Controller", 00:10:27.186 "serial_number": "SPDK0", 00:10:27.186 "firmware_revision": "25.01", 00:10:27.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:27.186 "oacs": { 00:10:27.186 "security": 0, 00:10:27.186 "format": 0, 00:10:27.186 "firmware": 0, 00:10:27.186 "ns_manage": 0 00:10:27.186 }, 00:10:27.186 "multi_ctrlr": true, 00:10:27.186 "ana_reporting": false 00:10:27.186 }, 00:10:27.186 "vs": { 00:10:27.186 "nvme_version": "1.3" 00:10:27.186 }, 00:10:27.186 "ns_data": { 00:10:27.186 "id": 1, 00:10:27.186 "can_share": true 00:10:27.186 } 00:10:27.186 } 00:10:27.186 ], 00:10:27.186 "mp_policy": "active_passive" 00:10:27.186 } 00:10:27.186 } 00:10:27.186 ] 00:10:27.186 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2513534 00:10:27.186 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:27.187 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:27.187 Running I/O for 10 seconds... 00:10:28.560 Latency(us) 00:10:28.560 [2024-12-09T09:21:06.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.560 Nvme0n1 : 1.00 23516.00 91.86 0.00 0.00 0.00 0.00 0.00 00:10:28.560 [2024-12-09T09:21:06.284Z] =================================================================================================================== 00:10:28.560 [2024-12-09T09:21:06.284Z] Total : 23516.00 91.86 0.00 0.00 0.00 0.00 0.00 00:10:28.560 00:10:29.126 10:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:29.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.383 Nvme0n1 : 2.00 23728.50 92.69 0.00 0.00 0.00 0.00 0.00 00:10:29.383 [2024-12-09T09:21:07.107Z] =================================================================================================================== 00:10:29.383 [2024-12-09T09:21:07.107Z] Total : 23728.50 92.69 0.00 0.00 0.00 0.00 0.00 00:10:29.383 00:10:29.383 true 00:10:29.383 10:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:29.383 10:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:29.641 10:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:29.641 10:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:29.641 10:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2513534 00:10:30.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.207 Nvme0n1 : 3.00 23779.33 92.89 0.00 0.00 0.00 0.00 0.00 00:10:30.207 [2024-12-09T09:21:07.931Z] =================================================================================================================== 00:10:30.207 [2024-12-09T09:21:07.931Z] Total : 23779.33 92.89 0.00 0.00 0.00 0.00 0.00 00:10:30.207 00:10:31.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.580 Nvme0n1 : 4.00 23754.25 92.79 0.00 0.00 0.00 0.00 0.00 00:10:31.580 [2024-12-09T09:21:09.304Z] =================================================================================================================== 00:10:31.580 [2024-12-09T09:21:09.304Z] Total : 23754.25 92.79 0.00 0.00 0.00 0.00 0.00 00:10:31.580 00:10:32.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.517 Nvme0n1 : 5.00 23808.80 93.00 0.00 0.00 0.00 0.00 0.00 00:10:32.517 [2024-12-09T09:21:10.241Z] =================================================================================================================== 00:10:32.517 [2024-12-09T09:21:10.241Z] Total : 23808.80 93.00 0.00 0.00 0.00 0.00 0.00 00:10:32.517 00:10:33.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.454 Nvme0n1 : 6.00 23804.33 92.99 0.00 0.00 0.00 0.00 0.00 00:10:33.454 [2024-12-09T09:21:11.178Z] =================================================================================================================== 
00:10:33.454 [2024-12-09T09:21:11.178Z] Total : 23804.33 92.99 0.00 0.00 0.00 0.00 0.00 00:10:33.454 00:10:34.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.389 Nvme0n1 : 7.00 23843.86 93.14 0.00 0.00 0.00 0.00 0.00 00:10:34.389 [2024-12-09T09:21:12.113Z] =================================================================================================================== 00:10:34.389 [2024-12-09T09:21:12.113Z] Total : 23843.86 93.14 0.00 0.00 0.00 0.00 0.00 00:10:34.389 00:10:35.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.321 Nvme0n1 : 8.00 23881.00 93.29 0.00 0.00 0.00 0.00 0.00 00:10:35.321 [2024-12-09T09:21:13.045Z] =================================================================================================================== 00:10:35.321 [2024-12-09T09:21:13.045Z] Total : 23881.00 93.29 0.00 0.00 0.00 0.00 0.00 00:10:35.321 00:10:36.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.265 Nvme0n1 : 9.00 23911.00 93.40 0.00 0.00 0.00 0.00 0.00 00:10:36.265 [2024-12-09T09:21:13.989Z] =================================================================================================================== 00:10:36.265 [2024-12-09T09:21:13.989Z] Total : 23911.00 93.40 0.00 0.00 0.00 0.00 0.00 00:10:36.265 00:10:37.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.199 Nvme0n1 : 10.00 23933.10 93.49 0.00 0.00 0.00 0.00 0.00 00:10:37.199 [2024-12-09T09:21:14.923Z] =================================================================================================================== 00:10:37.199 [2024-12-09T09:21:14.923Z] Total : 23933.10 93.49 0.00 0.00 0.00 0.00 0.00 00:10:37.199 00:10:37.199 00:10:37.199 Latency(us) 00:10:37.199 [2024-12-09T09:21:14.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:10:37.199 Nvme0n1 : 10.00 23938.63 93.51 0.00 0.00 5344.26 3151.97 11047.50 00:10:37.199 [2024-12-09T09:21:14.923Z] =================================================================================================================== 00:10:37.199 [2024-12-09T09:21:14.923Z] Total : 23938.63 93.51 0.00 0.00 5344.26 3151.97 11047.50 00:10:37.199 { 00:10:37.199 "results": [ 00:10:37.199 { 00:10:37.199 "job": "Nvme0n1", 00:10:37.199 "core_mask": "0x2", 00:10:37.199 "workload": "randwrite", 00:10:37.199 "status": "finished", 00:10:37.199 "queue_depth": 128, 00:10:37.199 "io_size": 4096, 00:10:37.199 "runtime": 10.003039, 00:10:37.199 "iops": 23938.625051846742, 00:10:37.199 "mibps": 93.51025410877634, 00:10:37.199 "io_failed": 0, 00:10:37.199 "io_timeout": 0, 00:10:37.199 "avg_latency_us": 5344.264165504821, 00:10:37.199 "min_latency_us": 3151.9695238095237, 00:10:37.199 "max_latency_us": 11047.497142857143 00:10:37.199 } 00:10:37.199 ], 00:10:37.199 "core_count": 1 00:10:37.199 } 00:10:37.199 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2513304 00:10:37.199 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2513304 ']' 00:10:37.199 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2513304 00:10:37.199 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:37.199 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.199 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2513304 00:10:37.458 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:37.458 10:21:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:37.458 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2513304' 00:10:37.458 killing process with pid 2513304 00:10:37.458 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2513304 00:10:37.458 Received shutdown signal, test time was about 10.000000 seconds 00:10:37.458 00:10:37.458 Latency(us) 00:10:37.458 [2024-12-09T09:21:15.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.458 [2024-12-09T09:21:15.182Z] =================================================================================================================== 00:10:37.458 [2024-12-09T09:21:15.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:37.458 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2513304 00:10:37.458 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:37.716 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.974 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:37.974 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:38.232 [2024-12-09 10:21:15.860436] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.232 
10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:38.232 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:38.491 request: 00:10:38.491 { 00:10:38.491 "uuid": "c1af4768-1d39-4808-87e7-dcf58cf668ec", 00:10:38.491 "method": "bdev_lvol_get_lvstores", 00:10:38.491 "req_id": 1 00:10:38.491 } 00:10:38.491 Got JSON-RPC error response 00:10:38.491 response: 00:10:38.491 { 00:10:38.491 "code": -19, 00:10:38.491 "message": "No such device" 00:10:38.491 } 00:10:38.491 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:38.491 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:38.491 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:38.491 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:38.491 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:38.750 aio_bdev 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9ea73f77-ef72-45df-ae57-0528c0194722 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=9ea73f77-ef72-45df-ae57-0528c0194722 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:38.750 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9ea73f77-ef72-45df-ae57-0528c0194722 -t 2000 00:10:39.009 [ 00:10:39.009 { 00:10:39.009 "name": "9ea73f77-ef72-45df-ae57-0528c0194722", 00:10:39.009 "aliases": [ 00:10:39.009 "lvs/lvol" 00:10:39.009 ], 00:10:39.009 "product_name": "Logical Volume", 00:10:39.009 "block_size": 4096, 00:10:39.009 "num_blocks": 38912, 00:10:39.009 "uuid": "9ea73f77-ef72-45df-ae57-0528c0194722", 00:10:39.009 "assigned_rate_limits": { 00:10:39.009 "rw_ios_per_sec": 0, 00:10:39.009 "rw_mbytes_per_sec": 0, 00:10:39.009 "r_mbytes_per_sec": 0, 00:10:39.009 "w_mbytes_per_sec": 0 00:10:39.009 }, 00:10:39.009 "claimed": false, 00:10:39.009 "zoned": false, 00:10:39.009 "supported_io_types": { 00:10:39.009 "read": true, 00:10:39.009 "write": true, 00:10:39.009 "unmap": true, 00:10:39.009 "flush": false, 00:10:39.009 "reset": true, 00:10:39.009 
"nvme_admin": false, 00:10:39.009 "nvme_io": false, 00:10:39.009 "nvme_io_md": false, 00:10:39.009 "write_zeroes": true, 00:10:39.009 "zcopy": false, 00:10:39.009 "get_zone_info": false, 00:10:39.009 "zone_management": false, 00:10:39.009 "zone_append": false, 00:10:39.009 "compare": false, 00:10:39.009 "compare_and_write": false, 00:10:39.009 "abort": false, 00:10:39.009 "seek_hole": true, 00:10:39.009 "seek_data": true, 00:10:39.009 "copy": false, 00:10:39.009 "nvme_iov_md": false 00:10:39.009 }, 00:10:39.009 "driver_specific": { 00:10:39.009 "lvol": { 00:10:39.009 "lvol_store_uuid": "c1af4768-1d39-4808-87e7-dcf58cf668ec", 00:10:39.009 "base_bdev": "aio_bdev", 00:10:39.009 "thin_provision": false, 00:10:39.009 "num_allocated_clusters": 38, 00:10:39.009 "snapshot": false, 00:10:39.009 "clone": false, 00:10:39.009 "esnap_clone": false 00:10:39.009 } 00:10:39.009 } 00:10:39.009 } 00:10:39.009 ] 00:10:39.009 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:39.009 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:39.009 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:39.268 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:39.269 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:39.269 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:39.527 10:21:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:39.527 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9ea73f77-ef72-45df-ae57-0528c0194722 00:10:39.527 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1af4768-1d39-4808-87e7-dcf58cf668ec 00:10:39.786 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:40.046 00:10:40.046 real 0m15.455s 00:10:40.046 user 0m14.952s 00:10:40.046 sys 0m1.516s 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 ************************************ 00:10:40.046 END TEST lvs_grow_clean 00:10:40.046 ************************************ 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 ************************************ 
00:10:40.046 START TEST lvs_grow_dirty 00:10:40.046 ************************************ 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:40.046 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:40.305 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:40.305 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:40.563 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:40.563 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:40.563 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:40.563 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:40.563 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:40.563 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e5898b6c-93f1-4920-9227-dbcfab77ba4f lvol 150 00:10:40.822 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=260a52ca-0a9a-4819-98f1-c8577f6f6868 00:10:40.822 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:40.822 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:41.082 [2024-12-09 10:21:18.628677] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:41.082 [2024-12-09 10:21:18.628727] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:41.082 true 00:10:41.082 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:41.082 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:41.341 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:41.341 10:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:41.341 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 260a52ca-0a9a-4819-98f1-c8577f6f6868 00:10:41.600 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:41.859 [2024-12-09 10:21:19.346804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2516294 00:10:41.859 10:21:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2516294 /var/tmp/bdevperf.sock 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2516294 ']' 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:41.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.859 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:42.118 [2024-12-09 10:21:19.596033] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:42.118 [2024-12-09 10:21:19.596078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516294 ] 00:10:42.118 [2024-12-09 10:21:19.671683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.118 [2024-12-09 10:21:19.713933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.118 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.118 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:42.118 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:42.685 Nvme0n1 00:10:42.685 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:42.685 [ 00:10:42.685 { 00:10:42.685 "name": "Nvme0n1", 00:10:42.685 "aliases": [ 00:10:42.685 "260a52ca-0a9a-4819-98f1-c8577f6f6868" 00:10:42.685 ], 00:10:42.685 "product_name": "NVMe disk", 00:10:42.685 "block_size": 4096, 00:10:42.685 "num_blocks": 38912, 00:10:42.685 "uuid": "260a52ca-0a9a-4819-98f1-c8577f6f6868", 00:10:42.685 "numa_id": 1, 00:10:42.685 "assigned_rate_limits": { 00:10:42.685 "rw_ios_per_sec": 0, 00:10:42.685 "rw_mbytes_per_sec": 0, 00:10:42.685 "r_mbytes_per_sec": 0, 00:10:42.685 "w_mbytes_per_sec": 0 00:10:42.685 }, 00:10:42.685 "claimed": false, 00:10:42.685 "zoned": false, 00:10:42.685 "supported_io_types": { 00:10:42.685 "read": true, 
00:10:42.685 "write": true, 00:10:42.685 "unmap": true, 00:10:42.685 "flush": true, 00:10:42.685 "reset": true, 00:10:42.685 "nvme_admin": true, 00:10:42.685 "nvme_io": true, 00:10:42.685 "nvme_io_md": false, 00:10:42.685 "write_zeroes": true, 00:10:42.685 "zcopy": false, 00:10:42.685 "get_zone_info": false, 00:10:42.685 "zone_management": false, 00:10:42.685 "zone_append": false, 00:10:42.685 "compare": true, 00:10:42.685 "compare_and_write": true, 00:10:42.685 "abort": true, 00:10:42.685 "seek_hole": false, 00:10:42.685 "seek_data": false, 00:10:42.685 "copy": true, 00:10:42.685 "nvme_iov_md": false 00:10:42.685 }, 00:10:42.685 "memory_domains": [ 00:10:42.685 { 00:10:42.685 "dma_device_id": "system", 00:10:42.685 "dma_device_type": 1 00:10:42.685 } 00:10:42.685 ], 00:10:42.685 "driver_specific": { 00:10:42.685 "nvme": [ 00:10:42.685 { 00:10:42.685 "trid": { 00:10:42.685 "trtype": "TCP", 00:10:42.685 "adrfam": "IPv4", 00:10:42.685 "traddr": "10.0.0.2", 00:10:42.685 "trsvcid": "4420", 00:10:42.685 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:42.685 }, 00:10:42.685 "ctrlr_data": { 00:10:42.685 "cntlid": 1, 00:10:42.685 "vendor_id": "0x8086", 00:10:42.685 "model_number": "SPDK bdev Controller", 00:10:42.686 "serial_number": "SPDK0", 00:10:42.686 "firmware_revision": "25.01", 00:10:42.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:42.686 "oacs": { 00:10:42.686 "security": 0, 00:10:42.686 "format": 0, 00:10:42.686 "firmware": 0, 00:10:42.686 "ns_manage": 0 00:10:42.686 }, 00:10:42.686 "multi_ctrlr": true, 00:10:42.686 "ana_reporting": false 00:10:42.686 }, 00:10:42.686 "vs": { 00:10:42.686 "nvme_version": "1.3" 00:10:42.686 }, 00:10:42.686 "ns_data": { 00:10:42.686 "id": 1, 00:10:42.686 "can_share": true 00:10:42.686 } 00:10:42.686 } 00:10:42.686 ], 00:10:42.686 "mp_policy": "active_passive" 00:10:42.686 } 00:10:42.686 } 00:10:42.686 ] 00:10:42.944 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2516521 00:10:42.944 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:42.944 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:42.944 Running I/O for 10 seconds... 00:10:43.878 Latency(us) 00:10:43.878 [2024-12-09T09:21:21.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.878 Nvme0n1 : 1.00 23671.00 92.46 0.00 0.00 0.00 0.00 0.00 00:10:43.878 [2024-12-09T09:21:21.602Z] =================================================================================================================== 00:10:43.878 [2024-12-09T09:21:21.602Z] Total : 23671.00 92.46 0.00 0.00 0.00 0.00 0.00 00:10:43.878 00:10:44.812 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:44.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.812 Nvme0n1 : 2.00 23777.00 92.88 0.00 0.00 0.00 0.00 0.00 00:10:44.812 [2024-12-09T09:21:22.536Z] =================================================================================================================== 00:10:44.812 [2024-12-09T09:21:22.536Z] Total : 23777.00 92.88 0.00 0.00 0.00 0.00 0.00 00:10:44.812 00:10:45.069 true 00:10:45.070 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:45.070 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:45.328 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:45.328 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:45.328 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2516521 00:10:45.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.893 Nvme0n1 : 3.00 23818.33 93.04 0.00 0.00 0.00 0.00 0.00 00:10:45.893 [2024-12-09T09:21:23.617Z] =================================================================================================================== 00:10:45.893 [2024-12-09T09:21:23.617Z] Total : 23818.33 93.04 0.00 0.00 0.00 0.00 0.00 00:10:45.893 00:10:46.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.827 Nvme0n1 : 4.00 23852.25 93.17 0.00 0.00 0.00 0.00 0.00 00:10:46.827 [2024-12-09T09:21:24.551Z] =================================================================================================================== 00:10:46.827 [2024-12-09T09:21:24.551Z] Total : 23852.25 93.17 0.00 0.00 0.00 0.00 0.00 00:10:46.827 00:10:47.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.881 Nvme0n1 : 5.00 23870.60 93.24 0.00 0.00 0.00 0.00 0.00 00:10:47.881 [2024-12-09T09:21:25.605Z] =================================================================================================================== 00:10:47.881 [2024-12-09T09:21:25.605Z] Total : 23870.60 93.24 0.00 0.00 0.00 0.00 0.00 00:10:47.881 00:10:48.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.817 Nvme0n1 : 6.00 23918.83 93.43 0.00 0.00 0.00 0.00 0.00 00:10:48.817 [2024-12-09T09:21:26.541Z] =================================================================================================================== 00:10:48.817 
[2024-12-09T09:21:26.541Z] Total : 23918.83 93.43 0.00 0.00 0.00 0.00 0.00 00:10:48.817 00:10:50.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.195 Nvme0n1 : 7.00 23944.00 93.53 0.00 0.00 0.00 0.00 0.00 00:10:50.195 [2024-12-09T09:21:27.919Z] =================================================================================================================== 00:10:50.195 [2024-12-09T09:21:27.920Z] Total : 23944.00 93.53 0.00 0.00 0.00 0.00 0.00 00:10:50.196 00:10:51.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.133 Nvme0n1 : 8.00 23975.75 93.66 0.00 0.00 0.00 0.00 0.00 00:10:51.133 [2024-12-09T09:21:28.857Z] =================================================================================================================== 00:10:51.133 [2024-12-09T09:21:28.857Z] Total : 23975.75 93.66 0.00 0.00 0.00 0.00 0.00 00:10:51.133 00:10:52.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.067 Nvme0n1 : 9.00 24002.00 93.76 0.00 0.00 0.00 0.00 0.00 00:10:52.067 [2024-12-09T09:21:29.791Z] =================================================================================================================== 00:10:52.067 [2024-12-09T09:21:29.791Z] Total : 24002.00 93.76 0.00 0.00 0.00 0.00 0.00 00:10:52.067 00:10:53.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.000 Nvme0n1 : 10.00 23990.40 93.71 0.00 0.00 0.00 0.00 0.00 00:10:53.000 [2024-12-09T09:21:30.724Z] =================================================================================================================== 00:10:53.000 [2024-12-09T09:21:30.724Z] Total : 23990.40 93.71 0.00 0.00 0.00 0.00 0.00 00:10:53.000 00:10:53.000 00:10:53.000 Latency(us) 00:10:53.000 [2024-12-09T09:21:30.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:53.000 Nvme0n1 : 10.00 23996.18 93.74 0.00 0.00 5331.27 2246.95 11359.57 00:10:53.000 [2024-12-09T09:21:30.724Z] =================================================================================================================== 00:10:53.000 [2024-12-09T09:21:30.725Z] Total : 23996.18 93.74 0.00 0.00 5331.27 2246.95 11359.57 00:10:53.001 { 00:10:53.001 "results": [ 00:10:53.001 { 00:10:53.001 "job": "Nvme0n1", 00:10:53.001 "core_mask": "0x2", 00:10:53.001 "workload": "randwrite", 00:10:53.001 "status": "finished", 00:10:53.001 "queue_depth": 128, 00:10:53.001 "io_size": 4096, 00:10:53.001 "runtime": 10.002926, 00:10:53.001 "iops": 23996.178718107083, 00:10:53.001 "mibps": 93.7350731176058, 00:10:53.001 "io_failed": 0, 00:10:53.001 "io_timeout": 0, 00:10:53.001 "avg_latency_us": 5331.2700046343025, 00:10:53.001 "min_latency_us": 2246.9485714285715, 00:10:53.001 "max_latency_us": 11359.573333333334 00:10:53.001 } 00:10:53.001 ], 00:10:53.001 "core_count": 1 00:10:53.001 } 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2516294 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2516294 ']' 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2516294 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2516294 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:53.001 10:21:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2516294' 00:10:53.001 killing process with pid 2516294 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2516294 00:10:53.001 Received shutdown signal, test time was about 10.000000 seconds 00:10:53.001 00:10:53.001 Latency(us) 00:10:53.001 [2024-12-09T09:21:30.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.001 [2024-12-09T09:21:30.725Z] =================================================================================================================== 00:10:53.001 [2024-12-09T09:21:30.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:53.001 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2516294 00:10:53.258 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:53.258 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:53.515 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:53.515 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2512858 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2512858 00:10:53.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2512858 Killed "${NVMF_APP[@]}" "$@" 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2518376 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2518376 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2518376 ']' 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.772 10:21:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.772 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:53.772 [2024-12-09 10:21:31.462325] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:53.772 [2024-12-09 10:21:31.462371] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.029 [2024-12-09 10:21:31.540381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.029 [2024-12-09 10:21:31.580746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.029 [2024-12-09 10:21:31.580779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.029 [2024-12-09 10:21:31.580786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.029 [2024-12-09 10:21:31.580792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.029 [2024-12-09 10:21:31.580797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:54.029 [2024-12-09 10:21:31.581357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.029 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.029 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:54.029 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.029 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.029 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:54.029 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.029 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:54.286 [2024-12-09 10:21:31.890901] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:54.286 [2024-12-09 10:21:31.890985] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:54.286 [2024-12-09 10:21:31.891010] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 260a52ca-0a9a-4819-98f1-c8577f6f6868 00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=260a52ca-0a9a-4819-98f1-c8577f6f6868 
00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.286 10:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:54.543 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 260a52ca-0a9a-4819-98f1-c8577f6f6868 -t 2000 00:10:54.800 [ 00:10:54.800 { 00:10:54.800 "name": "260a52ca-0a9a-4819-98f1-c8577f6f6868", 00:10:54.800 "aliases": [ 00:10:54.800 "lvs/lvol" 00:10:54.800 ], 00:10:54.800 "product_name": "Logical Volume", 00:10:54.800 "block_size": 4096, 00:10:54.800 "num_blocks": 38912, 00:10:54.800 "uuid": "260a52ca-0a9a-4819-98f1-c8577f6f6868", 00:10:54.800 "assigned_rate_limits": { 00:10:54.800 "rw_ios_per_sec": 0, 00:10:54.800 "rw_mbytes_per_sec": 0, 00:10:54.800 "r_mbytes_per_sec": 0, 00:10:54.800 "w_mbytes_per_sec": 0 00:10:54.800 }, 00:10:54.800 "claimed": false, 00:10:54.800 "zoned": false, 00:10:54.800 "supported_io_types": { 00:10:54.800 "read": true, 00:10:54.800 "write": true, 00:10:54.800 "unmap": true, 00:10:54.800 "flush": false, 00:10:54.800 "reset": true, 00:10:54.800 "nvme_admin": false, 00:10:54.800 "nvme_io": false, 00:10:54.800 "nvme_io_md": false, 00:10:54.800 "write_zeroes": true, 00:10:54.800 "zcopy": false, 00:10:54.800 "get_zone_info": false, 00:10:54.800 "zone_management": false, 00:10:54.800 "zone_append": 
false, 00:10:54.800 "compare": false, 00:10:54.800 "compare_and_write": false, 00:10:54.800 "abort": false, 00:10:54.800 "seek_hole": true, 00:10:54.800 "seek_data": true, 00:10:54.800 "copy": false, 00:10:54.800 "nvme_iov_md": false 00:10:54.800 }, 00:10:54.800 "driver_specific": { 00:10:54.800 "lvol": { 00:10:54.800 "lvol_store_uuid": "e5898b6c-93f1-4920-9227-dbcfab77ba4f", 00:10:54.800 "base_bdev": "aio_bdev", 00:10:54.800 "thin_provision": false, 00:10:54.800 "num_allocated_clusters": 38, 00:10:54.800 "snapshot": false, 00:10:54.800 "clone": false, 00:10:54.800 "esnap_clone": false 00:10:54.800 } 00:10:54.800 } 00:10:54.800 } 00:10:54.800 ] 00:10:54.800 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:54.800 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:54.800 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:54.800 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:54.800 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:54.800 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:55.057 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:55.057 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:55.314 [2024-12-09 10:21:32.815757] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.314 10:21:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:55.314 10:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:55.314 request: 00:10:55.314 { 00:10:55.314 "uuid": "e5898b6c-93f1-4920-9227-dbcfab77ba4f", 00:10:55.314 "method": "bdev_lvol_get_lvstores", 00:10:55.314 "req_id": 1 00:10:55.314 } 00:10:55.314 Got JSON-RPC error response 00:10:55.314 response: 00:10:55.314 { 00:10:55.314 "code": -19, 00:10:55.314 "message": "No such device" 00:10:55.314 } 00:10:55.314 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:55.314 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.314 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.314 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.314 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:55.572 aio_bdev 00:10:55.572 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 260a52ca-0a9a-4819-98f1-c8577f6f6868 00:10:55.572 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=260a52ca-0a9a-4819-98f1-c8577f6f6868 00:10:55.572 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.572 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:55.572 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.572 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.572 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:55.831 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 260a52ca-0a9a-4819-98f1-c8577f6f6868 -t 2000 00:10:56.089 [ 00:10:56.089 { 00:10:56.089 "name": "260a52ca-0a9a-4819-98f1-c8577f6f6868", 00:10:56.089 "aliases": [ 00:10:56.089 "lvs/lvol" 00:10:56.089 ], 00:10:56.089 "product_name": "Logical Volume", 00:10:56.089 "block_size": 4096, 00:10:56.089 "num_blocks": 38912, 00:10:56.089 "uuid": "260a52ca-0a9a-4819-98f1-c8577f6f6868", 00:10:56.089 "assigned_rate_limits": { 00:10:56.089 "rw_ios_per_sec": 0, 00:10:56.089 "rw_mbytes_per_sec": 0, 00:10:56.089 "r_mbytes_per_sec": 0, 00:10:56.089 "w_mbytes_per_sec": 0 00:10:56.089 }, 00:10:56.089 "claimed": false, 00:10:56.089 "zoned": false, 00:10:56.089 "supported_io_types": { 00:10:56.089 "read": true, 00:10:56.089 "write": true, 00:10:56.089 "unmap": true, 00:10:56.089 "flush": false, 00:10:56.089 "reset": true, 00:10:56.089 "nvme_admin": false, 00:10:56.089 "nvme_io": false, 00:10:56.089 "nvme_io_md": false, 00:10:56.089 "write_zeroes": true, 00:10:56.089 "zcopy": false, 00:10:56.089 "get_zone_info": false, 00:10:56.089 "zone_management": false, 00:10:56.089 "zone_append": false, 00:10:56.089 "compare": false, 00:10:56.089 "compare_and_write": false, 
00:10:56.089 "abort": false, 00:10:56.089 "seek_hole": true, 00:10:56.089 "seek_data": true, 00:10:56.089 "copy": false, 00:10:56.089 "nvme_iov_md": false 00:10:56.089 }, 00:10:56.089 "driver_specific": { 00:10:56.089 "lvol": { 00:10:56.089 "lvol_store_uuid": "e5898b6c-93f1-4920-9227-dbcfab77ba4f", 00:10:56.089 "base_bdev": "aio_bdev", 00:10:56.089 "thin_provision": false, 00:10:56.089 "num_allocated_clusters": 38, 00:10:56.089 "snapshot": false, 00:10:56.089 "clone": false, 00:10:56.089 "esnap_clone": false 00:10:56.089 } 00:10:56.089 } 00:10:56.089 } 00:10:56.089 ] 00:10:56.089 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:56.089 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:56.089 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:56.089 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:56.089 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:56.089 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:56.348 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:56.348 10:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 260a52ca-0a9a-4819-98f1-c8577f6f6868 00:10:56.607 10:21:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e5898b6c-93f1-4920-9227-dbcfab77ba4f 00:10:56.865 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:56.865 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:56.865 00:10:56.865 real 0m16.880s 00:10:56.865 user 0m43.689s 00:10:56.865 sys 0m3.652s 00:10:56.865 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.865 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:56.865 ************************************ 00:10:56.865 END TEST lvs_grow_dirty 00:10:56.865 ************************************ 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:57.124 nvmf_trace.0 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.124 rmmod nvme_tcp 00:10:57.124 rmmod nvme_fabrics 00:10:57.124 rmmod nvme_keyring 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2518376 ']' 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2518376 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2518376 ']' 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2518376 
00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518376 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518376' 00:10:57.124 killing process with pid 2518376 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2518376 00:10:57.124 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2518376 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.383 10:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.915 00:10:59.915 real 0m41.661s 00:10:59.915 user 1m4.190s 00:10:59.915 sys 0m10.193s 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:59.915 ************************************ 00:10:59.915 END TEST nvmf_lvs_grow 00:10:59.915 ************************************ 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.915 ************************************ 00:10:59.915 START TEST nvmf_bdev_io_wait 00:10:59.915 ************************************ 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:59.915 * Looking for test storage... 
00:10:59.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.915 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.916 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.916 --rc genhtml_branch_coverage=1 00:10:59.916 --rc genhtml_function_coverage=1 00:10:59.916 --rc genhtml_legend=1 00:10:59.916 --rc geninfo_all_blocks=1 00:10:59.916 --rc geninfo_unexecuted_blocks=1 00:10:59.916 00:10:59.916 ' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.916 --rc genhtml_branch_coverage=1 00:10:59.916 --rc genhtml_function_coverage=1 00:10:59.916 --rc genhtml_legend=1 00:10:59.916 --rc geninfo_all_blocks=1 00:10:59.916 --rc geninfo_unexecuted_blocks=1 00:10:59.916 00:10:59.916 ' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.916 --rc genhtml_branch_coverage=1 00:10:59.916 --rc genhtml_function_coverage=1 00:10:59.916 --rc genhtml_legend=1 00:10:59.916 --rc geninfo_all_blocks=1 00:10:59.916 --rc geninfo_unexecuted_blocks=1 00:10:59.916 00:10:59.916 ' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.916 --rc genhtml_branch_coverage=1 00:10:59.916 --rc genhtml_function_coverage=1 00:10:59.916 --rc genhtml_legend=1 00:10:59.916 --rc geninfo_all_blocks=1 00:10:59.916 --rc geninfo_unexecuted_blocks=1 00:10:59.916 00:10:59.916 ' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.916 10:21:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.916 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:59.917 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.917 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.917 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.917 10:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.483 10:21:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:06.483 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:06.483 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.483 10:21:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:06.483 Found net devices under 0000:86:00.0: cvl_0_0 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.483 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.483 
10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:06.484 Found net devices under 0000:86:00.1: cvl_0_1 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.484 10:21:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.484 10:21:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:11:06.484 00:11:06.484 --- 10.0.0.2 ping statistics --- 00:11:06.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.484 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:11:06.484 00:11:06.484 --- 10.0.0.1 ping statistics --- 00:11:06.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.484 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2522442 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2522442 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2522442 ']' 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.484 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.484 [2024-12-09 10:21:43.315432] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:06.484 [2024-12-09 10:21:43.315474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.484 [2024-12-09 10:21:43.392015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.484 [2024-12-09 10:21:43.435795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.484 [2024-12-09 10:21:43.435836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:06.484 [2024-12-09 10:21:43.435845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.484 [2024-12-09 10:21:43.435851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.484 [2024-12-09 10:21:43.435856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.484 [2024-12-09 10:21:43.440826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.485 [2024-12-09 10:21:43.440854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.485 [2024-12-09 10:21:43.440984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.485 [2024-12-09 10:21:43.440985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 10:21:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.485 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.744 [2024-12-09 10:21:44.263484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.744 Malloc0 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.744 
10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.744 [2024-12-09 10:21:44.318893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2522688 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2522690 
00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:06.744 { 00:11:06.744 "params": { 00:11:06.744 "name": "Nvme$subsystem", 00:11:06.744 "trtype": "$TEST_TRANSPORT", 00:11:06.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.744 "adrfam": "ipv4", 00:11:06.744 "trsvcid": "$NVMF_PORT", 00:11:06.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.744 "hdgst": ${hdgst:-false}, 00:11:06.744 "ddgst": ${ddgst:-false} 00:11:06.744 }, 00:11:06.744 "method": "bdev_nvme_attach_controller" 00:11:06.744 } 00:11:06.744 EOF 00:11:06.744 )") 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2522692 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:06.744 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:06.745 { 00:11:06.745 "params": { 00:11:06.745 "name": "Nvme$subsystem", 00:11:06.745 "trtype": "$TEST_TRANSPORT", 00:11:06.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.745 "adrfam": "ipv4", 00:11:06.745 "trsvcid": "$NVMF_PORT", 00:11:06.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.745 "hdgst": ${hdgst:-false}, 00:11:06.745 "ddgst": ${ddgst:-false} 00:11:06.745 }, 00:11:06.745 "method": "bdev_nvme_attach_controller" 00:11:06.745 } 00:11:06.745 EOF 00:11:06.745 )") 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2522695 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:06.745 10:21:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:06.745 { 00:11:06.745 "params": { 00:11:06.745 "name": "Nvme$subsystem", 00:11:06.745 "trtype": "$TEST_TRANSPORT", 00:11:06.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.745 "adrfam": "ipv4", 00:11:06.745 "trsvcid": "$NVMF_PORT", 00:11:06.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.745 "hdgst": ${hdgst:-false}, 00:11:06.745 "ddgst": ${ddgst:-false} 00:11:06.745 }, 00:11:06.745 "method": "bdev_nvme_attach_controller" 00:11:06.745 } 00:11:06.745 EOF 00:11:06.745 )") 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:06.745 { 00:11:06.745 "params": { 00:11:06.745 "name": "Nvme$subsystem", 00:11:06.745 "trtype": "$TEST_TRANSPORT", 00:11:06.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.745 "adrfam": "ipv4", 00:11:06.745 "trsvcid": "$NVMF_PORT", 00:11:06.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.745 "hdgst": ${hdgst:-false}, 00:11:06.745 "ddgst": ${ddgst:-false} 00:11:06.745 }, 00:11:06.745 "method": "bdev_nvme_attach_controller" 00:11:06.745 } 00:11:06.745 EOF 00:11:06.745 )") 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2522688 00:11:06.745 10:21:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:06.745 "params": { 00:11:06.745 "name": "Nvme1", 00:11:06.745 "trtype": "tcp", 00:11:06.745 "traddr": "10.0.0.2", 00:11:06.745 "adrfam": "ipv4", 00:11:06.745 "trsvcid": "4420", 00:11:06.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.745 "hdgst": false, 00:11:06.745 "ddgst": false 00:11:06.745 }, 00:11:06.745 "method": "bdev_nvme_attach_controller" 00:11:06.745 }' 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:06.745 "params": { 00:11:06.745 "name": "Nvme1", 00:11:06.745 "trtype": "tcp", 00:11:06.745 "traddr": "10.0.0.2", 00:11:06.745 "adrfam": "ipv4", 00:11:06.745 "trsvcid": "4420", 00:11:06.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.745 "hdgst": false, 00:11:06.745 "ddgst": false 00:11:06.745 }, 00:11:06.745 "method": "bdev_nvme_attach_controller" 00:11:06.745 }' 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:06.745 "params": { 00:11:06.745 "name": "Nvme1", 00:11:06.745 "trtype": "tcp", 00:11:06.745 "traddr": "10.0.0.2", 00:11:06.745 "adrfam": "ipv4", 00:11:06.745 "trsvcid": "4420", 00:11:06.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.745 "hdgst": false, 00:11:06.745 "ddgst": false 00:11:06.745 }, 00:11:06.745 "method": "bdev_nvme_attach_controller" 00:11:06.745 }' 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:06.745 10:21:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:06.745 "params": { 00:11:06.745 "name": "Nvme1", 00:11:06.745 "trtype": "tcp", 00:11:06.745 "traddr": "10.0.0.2", 00:11:06.745 "adrfam": "ipv4", 00:11:06.745 "trsvcid": "4420", 00:11:06.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.745 "hdgst": false, 00:11:06.745 "ddgst": false 00:11:06.745 }, 00:11:06.745 "method": "bdev_nvme_attach_controller" 00:11:06.745 }' 00:11:06.745 [2024-12-09 10:21:44.371103] Starting SPDK v25.01-pre git sha1 
496bfd677 / DPDK 24.03.0 initialization... 00:11:06.745 [2024-12-09 10:21:44.371151] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:06.745 [2024-12-09 10:21:44.371455] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:06.745 [2024-12-09 10:21:44.371493] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:06.745 [2024-12-09 10:21:44.371994] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:06.745 [2024-12-09 10:21:44.372031] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:06.745 [2024-12-09 10:21:44.375048] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:06.745 [2024-12-09 10:21:44.375091] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:07.004 [2024-12-09 10:21:44.561351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.004 [2024-12-09 10:21:44.601781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:07.004 [2024-12-09 10:21:44.652977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.004 [2024-12-09 10:21:44.695866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:07.263 [2024-12-09 10:21:44.746038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.263 [2024-12-09 10:21:44.799039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:07.263 [2024-12-09 10:21:44.805924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.263 [2024-12-09 10:21:44.848443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:07.263 Running I/O for 1 seconds... 00:11:07.522 Running I/O for 1 seconds... 00:11:07.522 Running I/O for 1 seconds... 00:11:07.522 Running I/O for 1 seconds... 
00:11:08.457 12410.00 IOPS, 48.48 MiB/s 00:11:08.457 Latency(us) 00:11:08.457 [2024-12-09T09:21:46.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.457 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:08.457 Nvme1n1 : 1.01 12467.24 48.70 0.00 0.00 10233.45 5211.67 15166.90 00:11:08.457 [2024-12-09T09:21:46.181Z] =================================================================================================================== 00:11:08.457 [2024-12-09T09:21:46.181Z] Total : 12467.24 48.70 0.00 0.00 10233.45 5211.67 15166.90 00:11:08.457 11421.00 IOPS, 44.61 MiB/s 00:11:08.457 Latency(us) 00:11:08.457 [2024-12-09T09:21:46.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.457 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:08.457 Nvme1n1 : 1.01 11489.97 44.88 0.00 0.00 11106.54 4306.65 19598.38 00:11:08.457 [2024-12-09T09:21:46.181Z] =================================================================================================================== 00:11:08.457 [2024-12-09T09:21:46.181Z] Total : 11489.97 44.88 0.00 0.00 11106.54 4306.65 19598.38 00:11:08.457 242880.00 IOPS, 948.75 MiB/s 00:11:08.457 Latency(us) 00:11:08.457 [2024-12-09T09:21:46.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.457 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:08.457 Nvme1n1 : 1.00 242505.33 947.29 0.00 0.00 524.95 221.38 1521.37 00:11:08.457 [2024-12-09T09:21:46.181Z] =================================================================================================================== 00:11:08.457 [2024-12-09T09:21:46.181Z] Total : 242505.33 947.29 0.00 0.00 524.95 221.38 1521.37 00:11:08.457 10169.00 IOPS, 39.72 MiB/s 00:11:08.457 Latency(us) 00:11:08.457 [2024-12-09T09:21:46.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.457 Job: Nvme1n1 (Core 
Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:08.457 Nvme1n1 : 1.01 10240.55 40.00 0.00 0.00 12459.95 4712.35 24092.28 00:11:08.457 [2024-12-09T09:21:46.181Z] =================================================================================================================== 00:11:08.457 [2024-12-09T09:21:46.181Z] Total : 10240.55 40.00 0.00 0.00 12459.95 4712.35 24092.28 00:11:08.457 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2522690 00:11:08.457 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2522692 00:11:08.457 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2522695 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.716 rmmod nvme_tcp 00:11:08.716 rmmod nvme_fabrics 00:11:08.716 rmmod nvme_keyring 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2522442 ']' 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2522442 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2522442 ']' 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2522442 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522442 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522442' 00:11:08.716 killing process with pid 2522442 00:11:08.716 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2522442 00:11:08.716 10:21:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2522442 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.975 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.976 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.976 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.976 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.976 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.880 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.880 00:11:10.880 real 0m11.453s 00:11:10.880 user 0m19.014s 00:11:10.880 sys 0m6.231s 00:11:10.880 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.880 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.880 ************************************ 
00:11:10.880 END TEST nvmf_bdev_io_wait 00:11:10.880 ************************************ 00:11:10.880 10:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:10.880 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.880 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.880 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.139 ************************************ 00:11:11.139 START TEST nvmf_queue_depth 00:11:11.139 ************************************ 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:11.139 * Looking for test storage... 00:11:11.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.139 --rc genhtml_branch_coverage=1 00:11:11.139 --rc genhtml_function_coverage=1 00:11:11.139 --rc genhtml_legend=1 00:11:11.139 --rc geninfo_all_blocks=1 00:11:11.139 --rc 
geninfo_unexecuted_blocks=1 00:11:11.139 00:11:11.139 ' 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:11.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.139 --rc genhtml_branch_coverage=1 00:11:11.139 --rc genhtml_function_coverage=1 00:11:11.139 --rc genhtml_legend=1 00:11:11.139 --rc geninfo_all_blocks=1 00:11:11.139 --rc geninfo_unexecuted_blocks=1 00:11:11.139 00:11:11.139 ' 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.139 --rc genhtml_branch_coverage=1 00:11:11.139 --rc genhtml_function_coverage=1 00:11:11.139 --rc genhtml_legend=1 00:11:11.139 --rc geninfo_all_blocks=1 00:11:11.139 --rc geninfo_unexecuted_blocks=1 00:11:11.139 00:11:11.139 ' 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.139 --rc genhtml_branch_coverage=1 00:11:11.139 --rc genhtml_function_coverage=1 00:11:11.139 --rc genhtml_legend=1 00:11:11.139 --rc geninfo_all_blocks=1 00:11:11.139 --rc geninfo_unexecuted_blocks=1 00:11:11.139 00:11:11.139 ' 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.139 10:21:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.139 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.140 10:21:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.140 10:21:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.140 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.711 10:21:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:17.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:17.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:17.711 Found net devices under 0000:86:00.0: cvl_0_0 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:17.711 Found net devices under 0000:86:00.1: cvl_0_1 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.711 
10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.711 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:11:17.712 00:11:17.712 --- 10.0.0.2 ping statistics --- 00:11:17.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.712 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:17.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:11:17.712 00:11:17.712 --- 10.0.0.1 ping statistics --- 00:11:17.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.712 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2526574 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2526574 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2526574 ']' 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.712 10:21:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 [2024-12-09 10:21:54.902625] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:17.712 [2024-12-09 10:21:54.902675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.712 [2024-12-09 10:21:54.987375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.712 [2024-12-09 10:21:55.027237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.712 [2024-12-09 10:21:55.027275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:17.712 [2024-12-09 10:21:55.027284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.712 [2024-12-09 10:21:55.027289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.712 [2024-12-09 10:21:55.027294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.712 [2024-12-09 10:21:55.027887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 [2024-12-09 10:21:55.172562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 Malloc0 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 [2024-12-09 10:21:55.222869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.712 10:21:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2526725 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2526725 /var/tmp/bdevperf.sock 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2526725 ']' 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:17.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.712 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 [2024-12-09 10:21:55.271249] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:17.712 [2024-12-09 10:21:55.271290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526725 ] 00:11:17.712 [2024-12-09 10:21:55.343112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.712 [2024-12-09 10:21:55.387155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.970 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.970 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:17.970 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:17.970 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.970 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.970 NVMe0n1 00:11:17.970 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.970 10:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:17.970 Running I/O for 10 seconds... 
00:11:19.972 11958.00 IOPS, 46.71 MiB/s [2024-12-09T09:21:59.071Z] 12287.50 IOPS, 48.00 MiB/s [2024-12-09T09:22:00.010Z] 12383.33 IOPS, 48.37 MiB/s [2024-12-09T09:22:00.965Z] 12523.75 IOPS, 48.92 MiB/s [2024-12-09T09:22:01.902Z] 12522.80 IOPS, 48.92 MiB/s [2024-12-09T09:22:02.837Z] 12535.67 IOPS, 48.97 MiB/s [2024-12-09T09:22:03.773Z] 12561.00 IOPS, 49.07 MiB/s [2024-12-09T09:22:04.714Z] 12586.75 IOPS, 49.17 MiB/s [2024-12-09T09:22:06.091Z] 12613.11 IOPS, 49.27 MiB/s [2024-12-09T09:22:06.091Z] 12626.90 IOPS, 49.32 MiB/s 00:11:28.367 Latency(us) 00:11:28.367 [2024-12-09T09:22:06.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.367 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:28.367 Verification LBA range: start 0x0 length 0x4000 00:11:28.367 NVMe0n1 : 10.06 12656.20 49.44 0.00 0.00 80616.79 17476.27 52428.80 00:11:28.367 [2024-12-09T09:22:06.091Z] =================================================================================================================== 00:11:28.367 [2024-12-09T09:22:06.091Z] Total : 12656.20 49.44 0.00 0.00 80616.79 17476.27 52428.80 00:11:28.367 { 00:11:28.367 "results": [ 00:11:28.367 { 00:11:28.367 "job": "NVMe0n1", 00:11:28.367 "core_mask": "0x1", 00:11:28.367 "workload": "verify", 00:11:28.367 "status": "finished", 00:11:28.367 "verify_range": { 00:11:28.367 "start": 0, 00:11:28.367 "length": 16384 00:11:28.367 }, 00:11:28.367 "queue_depth": 1024, 00:11:28.367 "io_size": 4096, 00:11:28.367 "runtime": 10.057756, 00:11:28.367 "iops": 12656.202834906713, 00:11:28.367 "mibps": 49.43829232385435, 00:11:28.367 "io_failed": 0, 00:11:28.367 "io_timeout": 0, 00:11:28.367 "avg_latency_us": 80616.79255034037, 00:11:28.367 "min_latency_us": 17476.266666666666, 00:11:28.367 "max_latency_us": 52428.8 00:11:28.367 } 00:11:28.367 ], 00:11:28.367 "core_count": 1 00:11:28.367 } 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2526725 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2526725 ']' 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2526725 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2526725 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2526725' 00:11:28.367 killing process with pid 2526725 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2526725 00:11:28.367 Received shutdown signal, test time was about 10.000000 seconds 00:11:28.367 00:11:28.367 Latency(us) 00:11:28.367 [2024-12-09T09:22:06.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.367 [2024-12-09T09:22:06.091Z] =================================================================================================================== 00:11:28.367 [2024-12-09T09:22:06.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2526725 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.367 10:22:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.367 rmmod nvme_tcp 00:11:28.367 rmmod nvme_fabrics 00:11:28.367 rmmod nvme_keyring 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2526574 ']' 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2526574 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2526574 ']' 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2526574 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.367 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2526574 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2526574' 00:11:28.626 killing process with pid 2526574 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2526574 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2526574 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.626 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.627 10:22:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.186 10:22:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.187 00:11:31.187 real 0m19.761s 00:11:31.187 user 0m23.015s 00:11:31.187 sys 0m6.109s 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.187 ************************************ 00:11:31.187 END TEST nvmf_queue_depth 00:11:31.187 ************************************ 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.187 ************************************ 00:11:31.187 START TEST nvmf_target_multipath 00:11:31.187 ************************************ 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:31.187 * Looking for test storage... 
00:11:31.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:31.187 10:22:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.187 --rc genhtml_branch_coverage=1 00:11:31.187 --rc genhtml_function_coverage=1 00:11:31.187 --rc genhtml_legend=1 00:11:31.187 --rc geninfo_all_blocks=1 00:11:31.187 --rc geninfo_unexecuted_blocks=1 00:11:31.187 00:11:31.187 ' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.187 --rc genhtml_branch_coverage=1 00:11:31.187 --rc genhtml_function_coverage=1 00:11:31.187 --rc genhtml_legend=1 00:11:31.187 --rc geninfo_all_blocks=1 00:11:31.187 --rc geninfo_unexecuted_blocks=1 00:11:31.187 00:11:31.187 ' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.187 --rc genhtml_branch_coverage=1 00:11:31.187 --rc genhtml_function_coverage=1 00:11:31.187 --rc genhtml_legend=1 00:11:31.187 --rc geninfo_all_blocks=1 00:11:31.187 --rc geninfo_unexecuted_blocks=1 00:11:31.187 00:11:31.187 ' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.187 --rc genhtml_branch_coverage=1 00:11:31.187 --rc genhtml_function_coverage=1 00:11:31.187 --rc genhtml_legend=1 00:11:31.187 --rc geninfo_all_blocks=1 00:11:31.187 --rc geninfo_unexecuted_blocks=1 00:11:31.187 00:11:31.187 ' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.187 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.188 10:22:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:37.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:37.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:37.760 Found net devices under 0000:86:00.0: cvl_0_0 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.760 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.760 10:22:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:37.761 Found net devices under 0000:86:00.1: cvl_0_1 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:37.761 00:11:37.761 --- 10.0.0.2 ping statistics --- 00:11:37.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.761 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:11:37.761 00:11:37.761 --- 10.0.0.1 ping statistics --- 00:11:37.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.761 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:37.761 only one NIC for nvmf test 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:37.761 10:22:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.761 rmmod nvme_tcp 00:11:37.761 rmmod nvme_fabrics 00:11:37.761 rmmod nvme_keyring 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.761 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.138 00:11:39.138 real 0m8.391s 00:11:39.138 user 0m1.927s 00:11:39.138 sys 0m4.462s 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.138 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:39.138 ************************************ 00:11:39.138 END TEST nvmf_target_multipath 00:11:39.138 ************************************ 00:11:39.397 10:22:16 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:39.397 10:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.397 10:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.397 10:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:39.397 ************************************ 00:11:39.397 START TEST nvmf_zcopy 00:11:39.397 ************************************ 00:11:39.397 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:39.397 * Looking for test storage... 00:11:39.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.397 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.398 10:22:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.398 --rc genhtml_branch_coverage=1 00:11:39.398 --rc genhtml_function_coverage=1 00:11:39.398 --rc genhtml_legend=1 00:11:39.398 --rc geninfo_all_blocks=1 00:11:39.398 --rc geninfo_unexecuted_blocks=1 00:11:39.398 00:11:39.398 ' 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.398 --rc genhtml_branch_coverage=1 00:11:39.398 --rc genhtml_function_coverage=1 00:11:39.398 --rc genhtml_legend=1 00:11:39.398 --rc geninfo_all_blocks=1 00:11:39.398 --rc geninfo_unexecuted_blocks=1 00:11:39.398 00:11:39.398 ' 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.398 --rc genhtml_branch_coverage=1 00:11:39.398 --rc genhtml_function_coverage=1 00:11:39.398 --rc genhtml_legend=1 00:11:39.398 --rc geninfo_all_blocks=1 00:11:39.398 --rc geninfo_unexecuted_blocks=1 00:11:39.398 00:11:39.398 ' 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.398 --rc genhtml_branch_coverage=1 00:11:39.398 --rc 
genhtml_function_coverage=1 00:11:39.398 --rc genhtml_legend=1 00:11:39.398 --rc geninfo_all_blocks=1 00:11:39.398 --rc geninfo_unexecuted_blocks=1 00:11:39.398 00:11:39.398 ' 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.398 10:22:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.398 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.656 10:22:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:39.656 10:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.228 10:22:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:46.228 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:46.229 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:46.229 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:46.229 Found net devices under 0000:86:00.0: cvl_0_0 00:11:46.229 10:22:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:46.229 Found net devices under 0000:86:00.1: cvl_0_1 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.229 10:22:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.229 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.230 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.230 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.230 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.230 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.230 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:11:46.230 00:11:46.230 --- 10.0.0.2 ping statistics --- 00:11:46.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.230 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:11:46.230 00:11:46.230 --- 10.0.0.1 ping statistics --- 00:11:46.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.230 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2535522 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2535522 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2535522 ']' 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.230 [2024-12-09 10:22:23.197856] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:46.230 [2024-12-09 10:22:23.197901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.230 [2024-12-09 10:22:23.275947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.230 [2024-12-09 10:22:23.315990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.230 [2024-12-09 10:22:23.316024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:46.230 [2024-12-09 10:22:23.316031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.230 [2024-12-09 10:22:23.316036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.230 [2024-12-09 10:22:23.316041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.230 [2024-12-09 10:22:23.316600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.230 [2024-12-09 10:22:23.451150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.230 [2024-12-09 10:22:23.471321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:46.230 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.231 malloc0 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:46.231 { 00:11:46.231 "params": { 00:11:46.231 "name": "Nvme$subsystem", 00:11:46.231 "trtype": "$TEST_TRANSPORT", 00:11:46.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.231 "adrfam": "ipv4", 00:11:46.231 "trsvcid": "$NVMF_PORT", 00:11:46.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.231 "hdgst": ${hdgst:-false}, 00:11:46.231 "ddgst": ${ddgst:-false} 00:11:46.231 }, 00:11:46.231 "method": "bdev_nvme_attach_controller" 00:11:46.231 } 00:11:46.231 EOF 00:11:46.231 )") 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
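For readability, the namespace plumbing performed by nvmf/common.sh near the top of this excerpt (from `ip netns add` through the iptables ACCEPT rule and the cross-namespace pings) can be collected into one sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values resolved in this particular run; applying the function requires root and real NICs.

```shell
# Sketch of the target/initiator split used above: the target NIC (cvl_0_0)
# moves into a private namespace, the initiator NIC (cvl_0_1) stays in the
# default namespace, and TCP port 4420 is opened for NVMe/TCP.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

setup_ns() {
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    # Allow inbound NVMe/TCP traffic arriving on the initiator-side NIC.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check connectivity in both directions, as the trace does.
    ping -c 1 10.0.0.2
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
}
```

Once `setup_ns` has run, the target (`nvmf_tgt`) is launched inside the namespace via `ip netns exec "$NVMF_TARGET_NAMESPACE" …`, which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` in the trace.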
00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:46.231 10:22:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:46.231 "params": { 00:11:46.231 "name": "Nvme1", 00:11:46.231 "trtype": "tcp", 00:11:46.231 "traddr": "10.0.0.2", 00:11:46.231 "adrfam": "ipv4", 00:11:46.231 "trsvcid": "4420", 00:11:46.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:46.231 "hdgst": false, 00:11:46.231 "ddgst": false 00:11:46.231 }, 00:11:46.231 "method": "bdev_nvme_attach_controller" 00:11:46.231 }' 00:11:46.231 [2024-12-09 10:22:23.553310] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:46.231 [2024-12-09 10:22:23.553356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535650 ] 00:11:46.231 [2024-12-09 10:22:23.628556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.231 [2024-12-09 10:22:23.669154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.231 Running I/O for 10 seconds... 
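The bdevperf attach config printed just above by `gen_nvmf_target_json` can be regenerated with a small helper. This is a sketch of only the fragment visible in the trace (one `bdev_nvme_attach_controller` entry with the values substituted for this run); any outer wrapper common.sh adds around it is not shown.

```shell
# Emit the per-subsystem attach fragment seen in the trace; $1 is the
# subsystem index, $2 the target address (10.0.0.2 in this run).
gen_attach_fragment() {
    local n=$1 traddr=$2 trsvcid=${3:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

The trace pipes this through `jq .` before handing it to bdevperf on `/dev/fd/62`, which is why the printed JSON is pretty-printed.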
00:11:48.329 8674.00 IOPS, 67.77 MiB/s [2024-12-09T09:22:26.997Z] 8772.50 IOPS, 68.54 MiB/s [2024-12-09T09:22:27.931Z] 8804.67 IOPS, 68.79 MiB/s [2024-12-09T09:22:29.309Z] 8817.75 IOPS, 68.89 MiB/s [2024-12-09T09:22:30.245Z] 8828.20 IOPS, 68.97 MiB/s [2024-12-09T09:22:31.178Z] 8834.67 IOPS, 69.02 MiB/s [2024-12-09T09:22:32.111Z] 8810.00 IOPS, 68.83 MiB/s [2024-12-09T09:22:33.045Z] 8817.88 IOPS, 68.89 MiB/s [2024-12-09T09:22:33.978Z] 8822.44 IOPS, 68.93 MiB/s [2024-12-09T09:22:33.978Z] 8827.90 IOPS, 68.97 MiB/s 00:11:56.254 Latency(us) 00:11:56.254 [2024-12-09T09:22:33.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.254 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:56.254 Verification LBA range: start 0x0 length 0x1000 00:11:56.254 Nvme1n1 : 10.01 8833.10 69.01 0.00 0.00 14450.12 388.14 22344.66 00:11:56.254 [2024-12-09T09:22:33.978Z] =================================================================================================================== 00:11:56.254 [2024-12-09T09:22:33.978Z] Total : 8833.10 69.01 0.00 0.00 14450.12 388.14 22344.66 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2537276 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:56.512 10:22:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:56.512 { 00:11:56.512 "params": { 00:11:56.512 "name": "Nvme$subsystem", 00:11:56.512 "trtype": "$TEST_TRANSPORT", 00:11:56.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.512 "adrfam": "ipv4", 00:11:56.512 "trsvcid": "$NVMF_PORT", 00:11:56.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.512 "hdgst": ${hdgst:-false}, 00:11:56.512 "ddgst": ${ddgst:-false} 00:11:56.512 }, 00:11:56.512 "method": "bdev_nvme_attach_controller" 00:11:56.512 } 00:11:56.512 EOF 00:11:56.512 )") 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:56.512 [2024-12-09 10:22:34.070784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.070821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:56.512 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:56.512 "params": { 00:11:56.512 "name": "Nvme1", 00:11:56.512 "trtype": "tcp", 00:11:56.512 "traddr": "10.0.0.2", 00:11:56.512 "adrfam": "ipv4", 00:11:56.512 "trsvcid": "4420", 00:11:56.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.512 "hdgst": false, 00:11:56.512 "ddgst": false 00:11:56.512 }, 00:11:56.512 "method": "bdev_nvme_attach_controller" 00:11:56.512 }' 00:11:56.512 [2024-12-09 10:22:34.082785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.082797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.094816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.094831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.106847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.106858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.107147] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:56.512 [2024-12-09 10:22:34.107188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537276 ] 00:11:56.512 [2024-12-09 10:22:34.118872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.118884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.130906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.130916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.142936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.142946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.154968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.154978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.167001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.167011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.179031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.179041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.181334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.512 [2024-12-09 10:22:34.191067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:56.512 [2024-12-09 10:22:34.191081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.203116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.203127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.215143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.215155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.512 [2024-12-09 10:22:34.223090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.512 [2024-12-09 10:22:34.227170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.512 [2024-12-09 10:22:34.227182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.239214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.239236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.251237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.251252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.263267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.263281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.275298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.275309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.287330] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.287349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.299360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.299369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.311409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.311430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.323436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.323450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.335466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.335481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.347502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.347516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.359534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.359548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.371571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.371589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 Running I/O for 5 seconds... 
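For reference, the target-side provisioning traced earlier (the `rpc_cmd` calls from target/zcopy.sh@22 through @30) corresponds to the RPC sequence below. It is a sketch: `rpc_cmd` normally wraps SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock`, so the helper takes the RPC client command as a parameter rather than hard-coding a path.

```shell
# Replay of the zcopy target setup seen above: zero-copy TCP transport,
# one subsystem backed by a 32 MiB malloc bdev, listeners on 10.0.0.2:4420.
provision_zcopy_target() {
    local rpc=$1   # e.g. scripts/rpc.py from the SPDK tree (assumed path)
    "$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
}
```

The repeated "Requested NSID 1 already in use" records that follow are expected here: once NSID 1 is attached, further `nvmf_subsystem_add_ns … -n 1` attempts are rejected by the target, which is the error path this phase of the test exercises.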
00:11:56.770 [2024-12-09 10:22:34.383598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.383608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.398832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.398855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.770 [2024-12-09 10:22:34.413476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.770 [2024-12-09 10:22:34.413495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.771 [2024-12-09 10:22:34.428984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.771 [2024-12-09 10:22:34.429014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.771 [2024-12-09 10:22:34.443194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.771 [2024-12-09 10:22:34.443219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.771 [2024-12-09 10:22:34.457307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.771 [2024-12-09 10:22:34.457326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.771 [2024-12-09 10:22:34.471187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.771 [2024-12-09 10:22:34.471205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.771 [2024-12-09 10:22:34.485079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.771 [2024-12-09 10:22:34.485097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.030 [2024-12-09 10:22:34.498973] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.030 [2024-12-09 10:22:34.498991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.030 [2024-12-09 10:22:34.512660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.030 [2024-12-09 10:22:34.512678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.030 [2024-12-09 10:22:34.526170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.030 [2024-12-09 10:22:34.526188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.030 [2024-12-09 10:22:34.539429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.030 [2024-12-09 10:22:34.539452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.030 [2024-12-09 10:22:34.553120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.030 [2024-12-09 10:22:34.553138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.030 [2024-12-09 10:22:34.566773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.030 [2024-12-09 10:22:34.566792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.580754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.580772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.594979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.594997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.608754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.608773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.622745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.622763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.636763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.636781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.650597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.650616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.664575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.664594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.678262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.678280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.692091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.692109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.706000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.706018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.719686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 
[2024-12-09 10:22:34.719705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.733311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.733333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.031 [2024-12-09 10:22:34.747366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.031 [2024-12-09 10:22:34.747385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.760781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.760800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.774699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.774717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.788546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.788564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.801972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.801996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.816079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.816097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.830227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.830245] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.843908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.843927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.857536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.857555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.871487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.871505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.885158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.885176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.899412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.899431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.913690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.913708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.925046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.925074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.939269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.939287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:57.290 [2024-12-09 10:22:34.952963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.952981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.967052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.967081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.980607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.980624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:34.994319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:34.994336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.290 [2024-12-09 10:22:35.008190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.290 [2024-12-09 10:22:35.008207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.021794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.555 [2024-12-09 10:22:35.021817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.035560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.555 [2024-12-09 10:22:35.035578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.049466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.555 [2024-12-09 10:22:35.049487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.063416] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.555 [2024-12-09 10:22:35.063435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.077042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.555 [2024-12-09 10:22:35.077060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.090733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.555 [2024-12-09 10:22:35.090751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.104965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.555 [2024-12-09 10:22:35.104983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.555 [2024-12-09 10:22:35.118568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.118586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.132609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.132628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.146772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.146790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.160740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.160758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.174325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.174344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.188245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.188263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.201470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.201488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.215539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.215558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.229119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.229137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.242815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.242849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.257477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.257495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.556 [2024-12-09 10:22:35.271032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.556 [2024-12-09 10:22:35.271050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.284906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 
[2024-12-09 10:22:35.284924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.298386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.298404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.312279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.312297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.326115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.326138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.339881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.339901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.353738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.353758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.367801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.367827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.381690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.381709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 16835.00 IOPS, 131.52 MiB/s [2024-12-09T09:22:35.540Z] [2024-12-09 10:22:35.395879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 
[2024-12-09 10:22:35.395896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.409631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.409649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.423333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.423350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.436792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.436816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.450921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.450939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.464453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.464472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.477932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.477952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.491739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.491758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.505372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.505391] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.519328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.519346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.816 [2024-12-09 10:22:35.533007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.816 [2024-12-09 10:22:35.533026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.546508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.546527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.560203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.560221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.573821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.573844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.587598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.587617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.601315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.601334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.614991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.615010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:58.075 [2024-12-09 10:22:35.628652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.628670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.642116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.642135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.655875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.655894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.669889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.669910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.683612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.683631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.696924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.696943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.710609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.710627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.724583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.724601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.738441] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.738462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.752509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.752529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.766562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.766581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.780236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.780254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.075 [2024-12-09 10:22:35.794377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.075 [2024-12-09 10:22:35.794397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.805769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.805789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.819634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.819652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.833864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.833888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.844936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.844956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.859191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.859209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.873048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.873076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.886738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.886756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.900407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.900425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.914198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.914216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.928119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.928137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.942123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.942141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.956151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 
[2024-12-09 10:22:35.956169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.969585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.969603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.983542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.983561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:35.997200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:35.997218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:36.010975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:36.010993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:36.024445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:36.024463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:36.038119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:36.038138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.334 [2024-12-09 10:22:36.052131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.334 [2024-12-09 10:22:36.052149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.065713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.065736] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.079751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.079769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.093450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.093472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.107140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.107158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.120701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.120719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.134363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.134383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.148557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.148575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.162154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.162171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.176019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.176037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:58.593 [2024-12-09 10:22:36.189765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.189784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.203475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.203493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.217442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.217461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.231133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.231152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.245028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.245046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.258904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.258922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.272738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.272756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.286168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.593 [2024-12-09 10:22:36.286185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.593 [2024-12-09 10:22:36.299869] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.594 [2024-12-09 10:22:36.299892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.594 [2024-12-09 10:22:36.313355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.594 [2024-12-09 10:22:36.313374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.327286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.327303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.341394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.341411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.352847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.352865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.367564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.367582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.380852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.380869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 16920.00 IOPS, 132.19 MiB/s [2024-12-09T09:22:36.577Z] [2024-12-09 10:22:36.394883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.394901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.408561] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.408579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.422697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.422715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.436815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.436832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.450484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.450503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.464263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.464281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.477771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.477788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.491404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.491423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.505540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.505559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.519382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.519401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.533078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.533096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.546864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.546883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.560505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.560523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.853 [2024-12-09 10:22:36.574300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.853 [2024-12-09 10:22:36.574318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.588219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.588237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.601979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.601997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.615705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.615723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.629401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 
[2024-12-09 10:22:36.629418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.643281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.643298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.657056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.657073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.670784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.670801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.684252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.684270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.698196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.698214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.712206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.712227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.725814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.725832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.739741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.739760] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.753478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.753495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.767589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.767607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.781772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.781790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.795429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.795447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.809296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.809316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.822720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.822739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.113 [2024-12-09 10:22:36.836448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.113 [2024-12-09 10:22:36.836467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.372 [2024-12-09 10:22:36.850077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.372 [2024-12-09 10:22:36.850098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:59.372 [2024-12-09 10:22:36.863705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.372 [2024-12-09 10:22:36.863724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2130: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520: "Unable to add namespace") repeats for every add-namespace attempt, roughly one attempt every 14 ms, from 10:22:36.877 through 10:22:39.077 (elapsed 00:11:59.372-00:12:01.445); repeats omitted. Interleaved I/O throughput samples during the loop: 16957.33 IOPS, 132.48 MiB/s [2024-12-09T09:22:37.615Z] and 17005.25 IOPS, 132.85 MiB/s [2024-12-09T09:22:38.651Z] ...]
00:12:01.445 [2024-12-09 10:22:39.091233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-09 10:22:39.091252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.445 [2024-12-09 10:22:39.104856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.445 [2024-12-09 10:22:39.104874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.445 [2024-12-09 10:22:39.119210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.445 [2024-12-09 10:22:39.119228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.445 [2024-12-09 10:22:39.132714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.445 [2024-12-09 10:22:39.132732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.445 [2024-12-09 10:22:39.146531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.445 [2024-12-09 10:22:39.146549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.445 [2024-12-09 10:22:39.160334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.445 [2024-12-09 10:22:39.160352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.174206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.174223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.187766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.187788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.201699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.201718] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.214910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.214929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.228561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.228579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.242656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.242674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.256619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.256637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.270355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.270374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.284481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.284500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.297920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.297937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.311681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.311699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:01.711 [2024-12-09 10:22:39.325520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.325539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.339001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.339019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.352342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.352360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.366420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.366439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.379871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.379888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 [2024-12-09 10:22:39.393442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.393460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.711 17014.60 IOPS, 132.93 MiB/s 00:12:01.711 Latency(us) 00:12:01.711 [2024-12-09T09:22:39.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.711 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:01.711 Nvme1n1 : 5.00 17023.72 133.00 0.00 0.00 7512.88 3542.06 15416.56 00:12:01.711 [2024-12-09T09:22:39.435Z] =================================================================================================================== 00:12:01.711 
[2024-12-09T09:22:39.435Z] Total : 17023.72 133.00 0.00 0.00 7512.88 3542.06 15416.56 00:12:01.711 [2024-12-09 10:22:39.403509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.711 [2024-12-09 10:22:39.403526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2537276) - No such process 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2537276 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.970
10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.970 delay0 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.970 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:02.228 [2024-12-09 10:22:39.709720] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:08.791 Initializing NVMe Controllers 00:12:08.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:08.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:08.791 Initialization complete. Launching workers. 
00:12:08.791 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 83 00:12:08.791 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 370, failed to submit 33 00:12:08.791 success 182, unsuccessful 188, failed 0 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.791 rmmod nvme_tcp 00:12:08.791 rmmod nvme_fabrics 00:12:08.791 rmmod nvme_keyring 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2535522 ']' 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2535522 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2535522 ']' 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2535522 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2535522 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2535522' 00:12:08.791 killing process with pid 2535522 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2535522 00:12:08.791 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2535522 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.791 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:10.697 00:12:10.697 real 0m31.254s 00:12:10.697 user 0m41.637s 00:12:10.697 sys 0m11.087s 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.697 ************************************ 00:12:10.697 END TEST nvmf_zcopy 00:12:10.697 ************************************ 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:10.697 ************************************ 00:12:10.697 START TEST nvmf_nmic 00:12:10.697 ************************************ 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:10.697 * Looking for test storage... 
00:12:10.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.697 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.698 10:22:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:10.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.698 --rc genhtml_branch_coverage=1 00:12:10.698 --rc genhtml_function_coverage=1 00:12:10.698 --rc genhtml_legend=1 00:12:10.698 --rc geninfo_all_blocks=1 00:12:10.698 --rc geninfo_unexecuted_blocks=1 
00:12:10.698 00:12:10.698 ' 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:10.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.698 --rc genhtml_branch_coverage=1 00:12:10.698 --rc genhtml_function_coverage=1 00:12:10.698 --rc genhtml_legend=1 00:12:10.698 --rc geninfo_all_blocks=1 00:12:10.698 --rc geninfo_unexecuted_blocks=1 00:12:10.698 00:12:10.698 ' 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:10.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.698 --rc genhtml_branch_coverage=1 00:12:10.698 --rc genhtml_function_coverage=1 00:12:10.698 --rc genhtml_legend=1 00:12:10.698 --rc geninfo_all_blocks=1 00:12:10.698 --rc geninfo_unexecuted_blocks=1 00:12:10.698 00:12:10.698 ' 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:10.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.698 --rc genhtml_branch_coverage=1 00:12:10.698 --rc genhtml_function_coverage=1 00:12:10.698 --rc genhtml_legend=1 00:12:10.698 --rc geninfo_all_blocks=1 00:12:10.698 --rc geninfo_unexecuted_blocks=1 00:12:10.698 00:12:10.698 ' 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.698 10:22:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.698 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.958 
10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.958 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.527 10:22:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.527 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:17.528 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:17.528 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:17.528 Found net devices under 0000:86:00.0: cvl_0_0 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:17.528 Found net devices under 0000:86:00.1: cvl_0_1 00:12:17.528 
10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:12:17.528 00:12:17.528 --- 10.0.0.2 ping statistics --- 00:12:17.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.528 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:17.528 00:12:17.528 --- 10.0.0.1 ping statistics --- 00:12:17.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.528 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.528 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2542869 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2542869 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2542869 ']' 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.529 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.529 [2024-12-09 10:22:54.465754] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:12:17.529 [2024-12-09 10:22:54.465803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.529 [2024-12-09 10:22:54.546889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.529 [2024-12-09 10:22:54.590389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.529 [2024-12-09 10:22:54.590425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:17.529 [2024-12-09 10:22:54.590432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.529 [2024-12-09 10:22:54.590438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.529 [2024-12-09 10:22:54.590443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.529 [2024-12-09 10:22:54.591890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.529 [2024-12-09 10:22:54.591995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.529 [2024-12-09 10:22:54.592101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.529 [2024-12-09 10:22:54.592103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.787 [2024-12-09 10:22:55.356032] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.787 
10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.787 Malloc0 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.787 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 [2024-12-09 10:22:55.424482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:17.788 test case1: single bdev can't be used in multiple subsystems 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 [2024-12-09 10:22:55.452420] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:17.788 [2024-12-09 
10:22:55.452439] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:17.788 [2024-12-09 10:22:55.452447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.788 request: 00:12:17.788 { 00:12:17.788 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:17.788 "namespace": { 00:12:17.788 "bdev_name": "Malloc0", 00:12:17.788 "no_auto_visible": false, 00:12:17.788 "hide_metadata": false 00:12:17.788 }, 00:12:17.788 "method": "nvmf_subsystem_add_ns", 00:12:17.788 "req_id": 1 00:12:17.788 } 00:12:17.788 Got JSON-RPC error response 00:12:17.788 response: 00:12:17.788 { 00:12:17.788 "code": -32602, 00:12:17.788 "message": "Invalid parameters" 00:12:17.788 } 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:17.788 Adding namespace failed - expected result. 
00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:17.788 test case2: host connect to nvmf target in multiple paths 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 [2024-12-09 10:22:55.464544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.788 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.163 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:20.098 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.098 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.098 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.098 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:20.098 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:12:22.626 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.626 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.626 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.626 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.626 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.626 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:22.626 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:22.626 [global] 00:12:22.626 thread=1 00:12:22.626 invalidate=1 00:12:22.626 rw=write 00:12:22.626 time_based=1 00:12:22.626 runtime=1 00:12:22.626 ioengine=libaio 00:12:22.626 direct=1 00:12:22.626 bs=4096 00:12:22.626 iodepth=1 00:12:22.626 norandommap=0 00:12:22.626 numjobs=1 00:12:22.626 00:12:22.626 verify_dump=1 00:12:22.626 verify_backlog=512 00:12:22.626 verify_state_save=0 00:12:22.626 do_verify=1 00:12:22.626 verify=crc32c-intel 00:12:22.626 [job0] 00:12:22.626 filename=/dev/nvme0n1 00:12:22.626 Could not set queue depth (nvme0n1) 00:12:22.626 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:22.626 fio-3.35 00:12:22.626 Starting 1 thread 00:12:23.562 00:12:23.562 job0: (groupid=0, jobs=1): err= 0: pid=2543950: Mon Dec 9 10:23:01 2024 00:12:23.562 read: IOPS=2228, BW=8915KiB/s (9129kB/s)(8924KiB/1001msec) 00:12:23.562 slat (nsec): min=8288, max=38031, avg=9265.63, stdev=1336.55 00:12:23.562 clat (usec): min=162, max=580, avg=217.56, stdev=25.01 00:12:23.562 lat (usec): min=171, max=589, avg=226.82, 
stdev=25.19 00:12:23.562 clat percentiles (usec): 00:12:23.562 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 208], 00:12:23.562 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:12:23.562 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 249], 95.00th=[ 253], 00:12:23.562 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 490], 99.95th=[ 545], 00:12:23.562 | 99.99th=[ 578] 00:12:23.562 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:23.562 slat (usec): min=12, max=23899, avg=22.48, stdev=472.09 00:12:23.562 clat (usec): min=115, max=491, avg=164.31, stdev=32.51 00:12:23.562 lat (usec): min=127, max=24161, avg=186.79, stdev=475.14 00:12:23.562 clat percentiles (usec): 00:12:23.562 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 143], 00:12:23.563 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:12:23.563 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 204], 95.00th=[ 249], 00:12:23.563 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 465], 00:12:23.563 | 99.99th=[ 494] 00:12:23.563 bw ( KiB/s): min=11272, max=11272, per=100.00%, avg=11272.00, stdev= 0.00, samples=1 00:12:23.563 iops : min= 2818, max= 2818, avg=2818.00, stdev= 0.00, samples=1 00:12:23.563 lat (usec) : 250=93.74%, 500=6.22%, 750=0.04% 00:12:23.563 cpu : usr=3.80%, sys=8.70%, ctx=4794, majf=0, minf=1 00:12:23.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.563 issued rwts: total=2231,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.563 00:12:23.563 Run status group 0 (all jobs): 00:12:23.563 READ: bw=8915KiB/s (9129kB/s), 8915KiB/s-8915KiB/s (9129kB/s-9129kB/s), io=8924KiB (9138kB), run=1001-1001msec 00:12:23.563 WRITE: bw=9.99MiB/s 
(10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:23.563 00:12:23.563 Disk stats (read/write): 00:12:23.563 nvme0n1: ios=2100/2307, merge=0/0, ticks=1091/337, in_queue=1428, util=98.50% 00:12:23.821 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.822 10:23:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.822 rmmod nvme_tcp 00:12:23.822 rmmod nvme_fabrics 00:12:23.822 rmmod nvme_keyring 00:12:23.822 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2542869 ']' 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2542869 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2542869 ']' 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2542869 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542869 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542869' 00:12:24.081 killing process with pid 2542869 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2542869 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2542869 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.081 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.616 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.616 00:12:26.616 real 0m15.618s 00:12:26.616 user 0m35.501s 00:12:26.616 sys 0m5.381s 00:12:26.616 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.616 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:26.616 ************************************ 00:12:26.616 END TEST nvmf_nmic 00:12:26.616 ************************************ 00:12:26.616 10:23:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:26.616 10:23:03 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.616 10:23:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.616 10:23:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:26.616 ************************************ 00:12:26.616 START TEST nvmf_fio_target 00:12:26.616 ************************************ 00:12:26.616 10:23:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:26.616 * Looking for test storage... 00:12:26.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.616 
10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.616 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:26.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.617 --rc genhtml_branch_coverage=1 00:12:26.617 --rc genhtml_function_coverage=1 00:12:26.617 --rc genhtml_legend=1 00:12:26.617 --rc geninfo_all_blocks=1 00:12:26.617 --rc geninfo_unexecuted_blocks=1 00:12:26.617 00:12:26.617 ' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:26.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.617 --rc genhtml_branch_coverage=1 00:12:26.617 --rc genhtml_function_coverage=1 00:12:26.617 --rc genhtml_legend=1 00:12:26.617 --rc geninfo_all_blocks=1 00:12:26.617 --rc geninfo_unexecuted_blocks=1 00:12:26.617 00:12:26.617 ' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:26.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.617 --rc genhtml_branch_coverage=1 00:12:26.617 --rc genhtml_function_coverage=1 00:12:26.617 --rc genhtml_legend=1 00:12:26.617 --rc geninfo_all_blocks=1 00:12:26.617 --rc geninfo_unexecuted_blocks=1 00:12:26.617 00:12:26.617 ' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:26.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.617 --rc genhtml_branch_coverage=1 00:12:26.617 --rc 
genhtml_function_coverage=1 00:12:26.617 --rc genhtml_legend=1 00:12:26.617 --rc geninfo_all_blocks=1 00:12:26.617 --rc geninfo_unexecuted_blocks=1 00:12:26.617 00:12:26.617 ' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.617 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.185 10:23:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.185 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:33.185 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:33.185 10:23:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:33.186 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:33.186 Found net devices under 0000:86:00.0: cvl_0_0 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:33.186 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.186 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:33.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:12:33.186 00:12:33.186 --- 10.0.0.2 ping statistics --- 00:12:33.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.186 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:12:33.186 00:12:33.186 --- 10.0.0.1 ping statistics --- 00:12:33.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.186 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2547722 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2547722 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2547722 ']' 00:12:33.186 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.187 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.187 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.187 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.187 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.187 [2024-12-09 10:23:10.188206] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:12:33.187 [2024-12-09 10:23:10.188251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.187 [2024-12-09 10:23:10.267043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.187 [2024-12-09 10:23:10.307729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.187 [2024-12-09 10:23:10.307767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.187 [2024-12-09 10:23:10.307774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.187 [2024-12-09 10:23:10.307780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.187 [2024-12-09 10:23:10.307785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:33.187 [2024-12-09 10:23:10.309257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.187 [2024-12-09 10:23:10.309364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.187 [2024-12-09 10:23:10.309447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.187 [2024-12-09 10:23:10.309448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.445 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.445 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:33.445 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.445 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.445 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.445 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.445 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:33.702 [2024-12-09 10:23:11.246381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.702 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:33.960 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:33.960 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.218 10:23:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:34.218 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.218 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:34.218 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.477 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:34.477 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:34.735 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:34.993 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:34.993 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:35.251 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:35.251 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:35.251 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:35.251 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:35.510 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.769 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:35.769 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.027 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:36.027 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.027 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.284 [2024-12-09 10:23:13.908659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.284 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:36.541 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:36.799 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
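The target-side configuration driven by fio.sh above reduces to a short rpc.py sequence. This is a sketch assembled from the log's xtrace lines, with all parameters copied from the log; `$rpc` stands in for SPDK's scripts/rpc.py and is set to echo here so the sequence can be inspected without a running target.

```shell
#!/usr/bin/env bash
# RPC sequence from the log: TCP transport, seven 64 MiB/512 B malloc
# bdevs, a raid0 and a concat RAID bdev, then a subsystem with four
# namespaces and a TCP listener on the namespaced address.
set -eu
rpc="echo rpc.py"   # replace with the real scripts/rpc.py to apply

$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do
  $rpc bdev_malloc_create 64 512            # creates Malloc0 .. Malloc6
done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side (host NQN/ID omitted here; the log passes both):
echo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

The four namespaces are why the subsequent `waitforserial SPDKISFASTANDAWESOME 4` expects exactly four block devices (nvme0n1..nvme0n4) before fio starts.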
00:12:37.735 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:37.735 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:37.735 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.735 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:37.735 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:37.735 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:40.319 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:40.319 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:40.319 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.319 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:40.319 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.319 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:40.319 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:40.319 [global] 00:12:40.319 thread=1 00:12:40.319 invalidate=1 00:12:40.319 rw=write 00:12:40.319 time_based=1 00:12:40.319 runtime=1 00:12:40.319 ioengine=libaio 00:12:40.319 direct=1 00:12:40.319 bs=4096 00:12:40.319 iodepth=1 00:12:40.319 norandommap=0 00:12:40.319 numjobs=1 00:12:40.319 00:12:40.319 
verify_dump=1 00:12:40.319 verify_backlog=512 00:12:40.319 verify_state_save=0 00:12:40.319 do_verify=1 00:12:40.319 verify=crc32c-intel 00:12:40.319 [job0] 00:12:40.319 filename=/dev/nvme0n1 00:12:40.319 [job1] 00:12:40.319 filename=/dev/nvme0n2 00:12:40.319 [job2] 00:12:40.319 filename=/dev/nvme0n3 00:12:40.319 [job3] 00:12:40.319 filename=/dev/nvme0n4 00:12:40.319 Could not set queue depth (nvme0n1) 00:12:40.319 Could not set queue depth (nvme0n2) 00:12:40.319 Could not set queue depth (nvme0n3) 00:12:40.319 Could not set queue depth (nvme0n4) 00:12:40.319 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.319 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.319 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.319 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.319 fio-3.35 00:12:40.319 Starting 4 threads 00:12:41.760 00:12:41.760 job0: (groupid=0, jobs=1): err= 0: pid=2549104: Mon Dec 9 10:23:19 2024 00:12:41.760 read: IOPS=2359, BW=9439KiB/s (9665kB/s)(9448KiB/1001msec) 00:12:41.760 slat (nsec): min=4259, max=26314, avg=7430.45, stdev=1065.76 00:12:41.760 clat (usec): min=173, max=41074, avg=233.48, stdev=853.34 00:12:41.760 lat (usec): min=181, max=41079, avg=240.91, stdev=853.30 00:12:41.760 clat percentiles (usec): 00:12:41.760 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:12:41.760 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 215], 00:12:41.760 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 253], 00:12:41.760 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 433], 99.95th=[ 7242], 00:12:41.760 | 99.99th=[41157] 00:12:41.760 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:41.760 slat (nsec): min=9684, max=37281, avg=10728.58, 
stdev=1128.52 00:12:41.760 clat (usec): min=115, max=257, avg=153.23, stdev=15.14 00:12:41.760 lat (usec): min=125, max=294, avg=163.96, stdev=15.31 00:12:41.760 clat percentiles (usec): 00:12:41.760 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:12:41.760 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:12:41.760 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 182], 00:12:41.760 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 210], 99.95th=[ 212], 00:12:41.760 | 99.99th=[ 258] 00:12:41.760 bw ( KiB/s): min=10616, max=10616, per=40.55%, avg=10616.00, stdev= 0.00, samples=1 00:12:41.760 iops : min= 2654, max= 2654, avg=2654.00, stdev= 0.00, samples=1 00:12:41.760 lat (usec) : 250=96.85%, 500=3.11% 00:12:41.760 lat (msec) : 10=0.02%, 50=0.02% 00:12:41.760 cpu : usr=2.60%, sys=4.50%, ctx=4924, majf=0, minf=1 00:12:41.760 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.760 issued rwts: total=2362,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.760 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.760 job1: (groupid=0, jobs=1): err= 0: pid=2549121: Mon Dec 9 10:23:19 2024 00:12:41.760 read: IOPS=23, BW=94.4KiB/s (96.7kB/s)(96.0KiB/1017msec) 00:12:41.760 slat (nsec): min=7083, max=23242, avg=20896.92, stdev=4417.26 00:12:41.760 clat (usec): min=233, max=42048, avg=38659.41, stdev=9629.83 00:12:41.760 lat (usec): min=242, max=42070, avg=38680.30, stdev=9631.78 00:12:41.760 clat percentiles (usec): 00:12:41.760 | 1.00th=[ 233], 5.00th=[16581], 10.00th=[41157], 20.00th=[41157], 00:12:41.760 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:41.760 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:41.760 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:12:41.760 | 99.99th=[42206] 00:12:41.760 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:12:41.760 slat (nsec): min=9267, max=40102, avg=10933.16, stdev=3191.33 00:12:41.760 clat (usec): min=127, max=298, avg=159.89, stdev=17.98 00:12:41.761 lat (usec): min=137, max=331, avg=170.82, stdev=19.38 00:12:41.761 clat percentiles (usec): 00:12:41.761 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:12:41.761 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:12:41.761 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 192], 00:12:41.761 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 297], 99.95th=[ 297], 00:12:41.761 | 99.99th=[ 297] 00:12:41.761 bw ( KiB/s): min= 4096, max= 4096, per=15.65%, avg=4096.00, stdev= 0.00, samples=1 00:12:41.761 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:41.761 lat (usec) : 250=95.34%, 500=0.37% 00:12:41.761 lat (msec) : 20=0.19%, 50=4.10% 00:12:41.761 cpu : usr=0.30%, sys=0.59%, ctx=536, majf=0, minf=1 00:12:41.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.761 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.761 job2: (groupid=0, jobs=1): err= 0: pid=2549141: Mon Dec 9 10:23:19 2024 00:12:41.761 read: IOPS=551, BW=2207KiB/s (2260kB/s)(2240KiB/1015msec) 00:12:41.761 slat (nsec): min=7484, max=32610, avg=9520.88, stdev=2726.58 00:12:41.761 clat (usec): min=182, max=41300, avg=1451.24, stdev=6949.92 00:12:41.761 lat (usec): min=190, max=41310, avg=1460.76, stdev=6949.97 00:12:41.761 clat percentiles (usec): 00:12:41.761 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:12:41.761 | 30.00th=[ 210], 40.00th=[ 212], 
50.00th=[ 217], 60.00th=[ 221], 00:12:41.761 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 277], 95.00th=[ 289], 00:12:41.761 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:41.761 | 99.99th=[41157] 00:12:41.761 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:12:41.761 slat (nsec): min=11284, max=53631, avg=15438.37, stdev=6058.83 00:12:41.761 clat (usec): min=125, max=291, avg=170.25, stdev=16.87 00:12:41.761 lat (usec): min=144, max=304, avg=185.69, stdev=18.47 00:12:41.761 clat percentiles (usec): 00:12:41.761 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:12:41.761 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:12:41.761 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:12:41.761 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 285], 99.95th=[ 293], 00:12:41.761 | 99.99th=[ 293] 00:12:41.761 bw ( KiB/s): min= 8192, max= 8192, per=31.29%, avg=8192.00, stdev= 0.00, samples=1 00:12:41.761 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:41.761 lat (usec) : 250=94.51%, 500=4.36%, 750=0.06% 00:12:41.761 lat (msec) : 50=1.07% 00:12:41.761 cpu : usr=1.28%, sys=2.86%, ctx=1586, majf=0, minf=1 00:12:41.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.761 issued rwts: total=560,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.761 job3: (groupid=0, jobs=1): err= 0: pid=2549148: Mon Dec 9 10:23:19 2024 00:12:41.761 read: IOPS=2417, BW=9670KiB/s (9902kB/s)(9680KiB/1001msec) 00:12:41.761 slat (nsec): min=6284, max=16805, avg=7082.33, stdev=662.98 00:12:41.761 clat (usec): min=173, max=289, avg=221.79, stdev=14.49 00:12:41.761 lat (usec): min=180, max=296, avg=228.87, 
stdev=14.54 00:12:41.761 clat percentiles (usec): 00:12:41.761 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:12:41.761 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 225], 00:12:41.761 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 245], 00:12:41.761 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 269], 99.95th=[ 285], 00:12:41.761 | 99.99th=[ 289] 00:12:41.761 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:41.761 slat (nsec): min=9100, max=41420, avg=10143.73, stdev=1149.41 00:12:41.761 clat (usec): min=127, max=371, avg=160.07, stdev=14.64 00:12:41.761 lat (usec): min=137, max=413, avg=170.21, stdev=14.96 00:12:41.761 clat percentiles (usec): 00:12:41.761 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:12:41.761 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:12:41.761 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:12:41.761 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 235], 99.95th=[ 237], 00:12:41.761 | 99.99th=[ 371] 00:12:41.761 bw ( KiB/s): min=12192, max=12192, per=46.57%, avg=12192.00, stdev= 0.00, samples=1 00:12:41.761 iops : min= 3048, max= 3048, avg=3048.00, stdev= 0.00, samples=1 00:12:41.761 lat (usec) : 250=98.61%, 500=1.39% 00:12:41.761 cpu : usr=2.40%, sys=4.50%, ctx=4980, majf=0, minf=2 00:12:41.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.761 issued rwts: total=2420,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.761 00:12:41.761 Run status group 0 (all jobs): 00:12:41.761 READ: bw=20.6MiB/s (21.6MB/s), 94.4KiB/s-9670KiB/s (96.7kB/s-9902kB/s), io=21.0MiB (22.0MB), run=1001-1017msec 00:12:41.761 WRITE: bw=25.6MiB/s (26.8MB/s), 
2014KiB/s-9.99MiB/s (2062kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1017msec 00:12:41.761 00:12:41.761 Disk stats (read/write): 00:12:41.761 nvme0n1: ios=2078/2104, merge=0/0, ticks=1448/327, in_queue=1775, util=97.39% 00:12:41.761 nvme0n2: ios=39/512, merge=0/0, ticks=730/72, in_queue=802, util=86.66% 00:12:41.761 nvme0n3: ios=605/1024, merge=0/0, ticks=1495/158, in_queue=1653, util=97.80% 00:12:41.761 nvme0n4: ios=2048/2170, merge=0/0, ticks=439/330, in_queue=769, util=89.64% 00:12:41.761 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:41.761 [global] 00:12:41.761 thread=1 00:12:41.761 invalidate=1 00:12:41.761 rw=randwrite 00:12:41.761 time_based=1 00:12:41.761 runtime=1 00:12:41.761 ioengine=libaio 00:12:41.761 direct=1 00:12:41.761 bs=4096 00:12:41.761 iodepth=1 00:12:41.761 norandommap=0 00:12:41.761 numjobs=1 00:12:41.761 00:12:41.761 verify_dump=1 00:12:41.761 verify_backlog=512 00:12:41.761 verify_state_save=0 00:12:41.761 do_verify=1 00:12:41.761 verify=crc32c-intel 00:12:41.761 [job0] 00:12:41.761 filename=/dev/nvme0n1 00:12:41.761 [job1] 00:12:41.761 filename=/dev/nvme0n2 00:12:41.761 [job2] 00:12:41.761 filename=/dev/nvme0n3 00:12:41.761 [job3] 00:12:41.761 filename=/dev/nvme0n4 00:12:41.761 Could not set queue depth (nvme0n1) 00:12:41.761 Could not set queue depth (nvme0n2) 00:12:41.761 Could not set queue depth (nvme0n3) 00:12:41.761 Could not set queue depth (nvme0n4) 00:12:41.761 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.761 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.761 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.761 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.761 fio-3.35 00:12:41.761 Starting 4 threads 00:12:43.232 00:12:43.232 job0: (groupid=0, jobs=1): err= 0: pid=2549570: Mon Dec 9 10:23:20 2024 00:12:43.232 read: IOPS=512, BW=2051KiB/s (2100kB/s)(2084KiB/1016msec) 00:12:43.232 slat (nsec): min=3715, max=35072, avg=7825.44, stdev=3314.38 00:12:43.232 clat (usec): min=169, max=42139, avg=1575.72, stdev=7299.49 00:12:43.232 lat (usec): min=176, max=42149, avg=1583.55, stdev=7300.72 00:12:43.232 clat percentiles (usec): 00:12:43.232 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:12:43.232 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:12:43.232 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 338], 00:12:43.232 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:43.232 | 99.99th=[42206] 00:12:43.232 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:12:43.232 slat (nsec): min=4725, max=36550, avg=9105.72, stdev=3483.19 00:12:43.232 clat (usec): min=111, max=440, avg=173.95, stdev=36.18 00:12:43.232 lat (usec): min=116, max=476, avg=183.05, stdev=37.62 00:12:43.232 clat percentiles (usec): 00:12:43.232 | 1.00th=[ 120], 5.00th=[ 128], 10.00th=[ 135], 20.00th=[ 143], 00:12:43.232 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 176], 00:12:43.232 | 70.00th=[ 184], 80.00th=[ 202], 90.00th=[ 229], 95.00th=[ 243], 00:12:43.232 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 355], 99.95th=[ 441], 00:12:43.232 | 99.99th=[ 441] 00:12:43.232 bw ( KiB/s): min= 4096, max= 4096, per=20.32%, avg=4096.00, stdev= 0.00, samples=2 00:12:43.232 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:12:43.232 lat (usec) : 250=91.84%, 500=6.86% 00:12:43.232 lat (msec) : 2=0.13%, 4=0.06%, 50=1.10% 00:12:43.232 cpu : usr=0.49%, sys=1.48%, ctx=1547, majf=0, minf=1 00:12:43.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.232 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.232 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.232 job1: (groupid=0, jobs=1): err= 0: pid=2549587: Mon Dec 9 10:23:20 2024 00:12:43.232 read: IOPS=1026, BW=4107KiB/s (4205kB/s)(4148KiB/1010msec) 00:12:43.232 slat (nsec): min=6377, max=28379, avg=7472.63, stdev=1957.55 00:12:43.232 clat (usec): min=155, max=41998, avg=734.05, stdev=4567.15 00:12:43.232 lat (usec): min=162, max=42023, avg=741.52, stdev=4568.80 00:12:43.232 clat percentiles (usec): 00:12:43.232 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:12:43.232 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 225], 60.00th=[ 239], 00:12:43.232 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:12:43.232 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:12:43.232 | 99.99th=[42206] 00:12:43.232 write: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec); 0 zone resets 00:12:43.232 slat (nsec): min=9412, max=40208, avg=10698.16, stdev=1846.72 00:12:43.232 clat (usec): min=107, max=547, avg=142.44, stdev=22.91 00:12:43.232 lat (usec): min=118, max=562, avg=153.14, stdev=23.52 00:12:43.232 clat percentiles (usec): 00:12:43.232 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 128], 00:12:43.232 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:12:43.232 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 00:12:43.232 | 99.00th=[ 212], 99.50th=[ 253], 99.90th=[ 375], 99.95th=[ 545], 00:12:43.232 | 99.99th=[ 545] 00:12:43.232 bw ( KiB/s): min=12288, max=12288, per=60.96%, avg=12288.00, stdev= 0.00, samples=1 00:12:43.232 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:43.232 lat (usec) : 250=90.71%, 500=8.74%, 750=0.04% 00:12:43.232 lat (msec) : 
50=0.51% 00:12:43.232 cpu : usr=1.39%, sys=2.28%, ctx=2574, majf=0, minf=1 00:12:43.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.232 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.232 job2: (groupid=0, jobs=1): err= 0: pid=2549619: Mon Dec 9 10:23:20 2024 00:12:43.232 read: IOPS=1772, BW=7089KiB/s (7259kB/s)(7096KiB/1001msec) 00:12:43.232 slat (nsec): min=4326, max=29240, avg=7762.82, stdev=1526.15 00:12:43.232 clat (usec): min=157, max=42288, avg=364.30, stdev=2402.38 00:12:43.232 lat (usec): min=165, max=42292, avg=372.06, stdev=2402.64 00:12:43.232 clat percentiles (usec): 00:12:43.232 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:12:43.232 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 227], 00:12:43.232 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:12:43.232 | 99.00th=[ 400], 99.50th=[ 562], 99.90th=[41157], 99.95th=[42206], 00:12:43.232 | 99.99th=[42206] 00:12:43.232 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:43.232 slat (nsec): min=4267, max=38484, avg=10477.22, stdev=2115.81 00:12:43.232 clat (usec): min=105, max=381, avg=151.37, stdev=38.36 00:12:43.232 lat (usec): min=116, max=419, avg=161.84, stdev=38.32 00:12:43.232 clat percentiles (usec): 00:12:43.232 | 1.00th=[ 112], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 125], 00:12:43.232 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 145], 00:12:43.232 | 70.00th=[ 155], 80.00th=[ 174], 90.00th=[ 225], 95.00th=[ 243], 00:12:43.232 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 359], 00:12:43.232 | 99.99th=[ 383] 00:12:43.232 bw ( KiB/s): min= 6672, max= 6672, per=33.10%, avg=6672.00, stdev= 
0.00, samples=1 00:12:43.232 iops : min= 1668, max= 1668, avg=1668.00, stdev= 0.00, samples=1 00:12:43.232 lat (usec) : 250=89.72%, 500=10.02%, 750=0.05% 00:12:43.232 lat (msec) : 4=0.03%, 20=0.03%, 50=0.16% 00:12:43.232 cpu : usr=2.30%, sys=3.30%, ctx=3823, majf=0, minf=1 00:12:43.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.232 issued rwts: total=1774,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.232 job3: (groupid=0, jobs=1): err= 0: pid=2549630: Mon Dec 9 10:23:20 2024 00:12:43.232 read: IOPS=413, BW=1654KiB/s (1694kB/s)(1656KiB/1001msec) 00:12:43.232 slat (nsec): min=6925, max=31615, avg=8526.64, stdev=3508.17 00:12:43.232 clat (usec): min=187, max=42115, avg=2155.14, stdev=8664.51 00:12:43.233 lat (usec): min=195, max=42123, avg=2163.66, stdev=8666.48 00:12:43.233 clat percentiles (usec): 00:12:43.233 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:12:43.233 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 258], 00:12:43.233 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 375], 95.00th=[ 644], 00:12:43.233 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:43.233 | 99.99th=[42206] 00:12:43.233 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:43.233 slat (nsec): min=9772, max=43054, avg=13019.93, stdev=3907.13 00:12:43.233 clat (usec): min=136, max=452, avg=185.59, stdev=39.50 00:12:43.233 lat (usec): min=147, max=464, avg=198.61, stdev=39.43 00:12:43.233 clat percentiles (usec): 00:12:43.233 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:12:43.233 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:12:43.233 | 70.00th=[ 188], 80.00th=[ 225], 90.00th=[ 243], 
95.00th=[ 247], 00:12:43.233 | 99.00th=[ 318], 99.50th=[ 371], 99.90th=[ 453], 99.95th=[ 453], 00:12:43.233 | 99.99th=[ 453] 00:12:43.233 bw ( KiB/s): min= 4096, max= 4096, per=20.32%, avg=4096.00, stdev= 0.00, samples=1 00:12:43.233 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:43.233 lat (usec) : 250=76.13%, 500=21.38%, 750=0.43% 00:12:43.233 lat (msec) : 50=2.05% 00:12:43.233 cpu : usr=0.70%, sys=0.80%, ctx=927, majf=0, minf=1 00:12:43.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.233 issued rwts: total=414,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:43.233 00:12:43.233 Run status group 0 (all jobs): 00:12:43.233 READ: bw=14.4MiB/s (15.1MB/s), 1654KiB/s-7089KiB/s (1694kB/s-7259kB/s), io=14.6MiB (15.3MB), run=1001-1016msec 00:12:43.233 WRITE: bw=19.7MiB/s (20.6MB/s), 2046KiB/s-8184KiB/s (2095kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1016msec 00:12:43.233 00:12:43.233 Disk stats (read/write): 00:12:43.233 nvme0n1: ios=553/1024, merge=0/0, ticks=1448/172, in_queue=1620, util=99.40% 00:12:43.233 nvme0n2: ios=1066/1536, merge=0/0, ticks=982/219, in_queue=1201, util=99.79% 00:12:43.233 nvme0n3: ios=1183/1536, merge=0/0, ticks=948/231, in_queue=1179, util=99.13% 00:12:43.233 nvme0n4: ios=443/512, merge=0/0, ticks=1448/90, in_queue=1538, util=98.67% 00:12:43.233 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:43.233 [global] 00:12:43.233 thread=1 00:12:43.233 invalidate=1 00:12:43.233 rw=write 00:12:43.233 time_based=1 00:12:43.233 runtime=1 00:12:43.233 ioengine=libaio 00:12:43.233 direct=1 00:12:43.233 bs=4096 
00:12:43.233 iodepth=128 00:12:43.233 norandommap=0 00:12:43.233 numjobs=1 00:12:43.233 00:12:43.233 verify_dump=1 00:12:43.233 verify_backlog=512 00:12:43.233 verify_state_save=0 00:12:43.233 do_verify=1 00:12:43.233 verify=crc32c-intel 00:12:43.233 [job0] 00:12:43.233 filename=/dev/nvme0n1 00:12:43.233 [job1] 00:12:43.233 filename=/dev/nvme0n2 00:12:43.233 [job2] 00:12:43.233 filename=/dev/nvme0n3 00:12:43.233 [job3] 00:12:43.233 filename=/dev/nvme0n4 00:12:43.233 Could not set queue depth (nvme0n1) 00:12:43.233 Could not set queue depth (nvme0n2) 00:12:43.233 Could not set queue depth (nvme0n3) 00:12:43.233 Could not set queue depth (nvme0n4) 00:12:43.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:43.491 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:43.491 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:43.491 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:43.491 fio-3.35 00:12:43.491 Starting 4 threads 00:12:44.861 00:12:44.861 job0: (groupid=0, jobs=1): err= 0: pid=2550051: Mon Dec 9 10:23:22 2024 00:12:44.861 read: IOPS=3687, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1006msec) 00:12:44.861 slat (nsec): min=1457, max=10659k, avg=139762.88, stdev=831159.52 00:12:44.861 clat (usec): min=1854, max=56072, avg=14775.32, stdev=12720.27 00:12:44.861 lat (usec): min=4365, max=56098, avg=14915.09, stdev=12795.47 00:12:44.861 clat percentiles (usec): 00:12:44.861 | 1.00th=[ 4883], 5.00th=[ 6718], 10.00th=[ 8029], 20.00th=[ 8455], 00:12:44.861 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10028], 00:12:44.861 | 70.00th=[10421], 80.00th=[15139], 90.00th=[38011], 95.00th=[51119], 00:12:44.861 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:12:44.861 | 99.99th=[55837] 00:12:44.861 
write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:12:44.861 slat (usec): min=2, max=29234, avg=111.73, stdev=707.81 00:12:44.861 clat (usec): min=2333, max=56031, avg=17449.62, stdev=9357.53 00:12:44.861 lat (usec): min=2346, max=56035, avg=17561.35, stdev=9390.28 00:12:44.861 clat percentiles (usec): 00:12:44.861 | 1.00th=[ 3392], 5.00th=[ 5735], 10.00th=[ 8717], 20.00th=[10421], 00:12:44.861 | 30.00th=[13304], 40.00th=[16319], 50.00th=[17171], 60.00th=[17433], 00:12:44.861 | 70.00th=[17433], 80.00th=[17695], 90.00th=[30802], 95.00th=[39584], 00:12:44.861 | 99.00th=[51119], 99.50th=[51119], 99.90th=[55313], 99.95th=[55837], 00:12:44.861 | 99.99th=[55837] 00:12:44.861 bw ( KiB/s): min=16048, max=16704, per=25.69%, avg=16376.00, stdev=463.86, samples=2 00:12:44.861 iops : min= 4012, max= 4176, avg=4094.00, stdev=115.97, samples=2 00:12:44.861 lat (msec) : 2=0.01%, 4=1.15%, 10=35.02%, 20=47.31%, 50=12.68% 00:12:44.861 lat (msec) : 100=3.82% 00:12:44.861 cpu : usr=2.99%, sys=5.77%, ctx=512, majf=0, minf=1 00:12:44.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:44.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.861 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.861 job1: (groupid=0, jobs=1): err= 0: pid=2550053: Mon Dec 9 10:23:22 2024 00:12:44.861 read: IOPS=3556, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1014msec) 00:12:44.861 slat (nsec): min=1371, max=14464k, avg=107516.33, stdev=757747.94 00:12:44.861 clat (usec): min=3477, max=37137, avg=12788.91, stdev=6130.92 00:12:44.861 lat (usec): min=3487, max=37165, avg=12896.43, stdev=6198.87 00:12:44.861 clat percentiles (usec): 00:12:44.861 | 1.00th=[ 5473], 5.00th=[ 7242], 10.00th=[ 8160], 20.00th=[ 8717], 00:12:44.861 | 30.00th=[ 9110], 40.00th=[ 
9372], 50.00th=[10028], 60.00th=[10814], 00:12:44.861 | 70.00th=[12518], 80.00th=[18482], 90.00th=[22938], 95.00th=[24773], 00:12:44.861 | 99.00th=[31065], 99.50th=[34341], 99.90th=[34341], 99.95th=[35914], 00:12:44.861 | 99.99th=[36963] 00:12:44.861 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:12:44.861 slat (usec): min=2, max=11067, avg=143.47, stdev=737.93 00:12:44.861 clat (usec): min=1661, max=103907, avg=20146.40, stdev=19333.70 00:12:44.861 lat (usec): min=1673, max=103919, avg=20289.88, stdev=19452.91 00:12:44.861 clat percentiles (msec): 00:12:44.861 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:12:44.862 | 30.00th=[ 10], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 18], 00:12:44.862 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 36], 95.00th=[ 75], 00:12:44.862 | 99.00th=[ 99], 99.50th=[ 104], 99.90th=[ 105], 99.95th=[ 105], 00:12:44.862 | 99.99th=[ 105] 00:12:44.862 bw ( KiB/s): min=10824, max=21104, per=25.05%, avg=15964.00, stdev=7269.06, samples=2 00:12:44.862 iops : min= 2706, max= 5276, avg=3991.00, stdev=1817.26, samples=2 00:12:44.862 lat (msec) : 2=0.04%, 4=0.78%, 10=40.12%, 20=40.79%, 50=14.14% 00:12:44.862 lat (msec) : 100=3.66%, 250=0.47% 00:12:44.862 cpu : usr=3.85%, sys=4.15%, ctx=444, majf=0, minf=1 00:12:44.862 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:44.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.862 issued rwts: total=3606,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.862 job2: (groupid=0, jobs=1): err= 0: pid=2550057: Mon Dec 9 10:23:22 2024 00:12:44.862 read: IOPS=3805, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec) 00:12:44.862 slat (nsec): min=1098, max=10393k, avg=105155.87, stdev=722016.04 00:12:44.862 clat (usec): min=4498, max=26088, avg=13056.62, stdev=4369.74 
00:12:44.862 lat (usec): min=4502, max=30879, avg=13161.77, stdev=4418.13 00:12:44.862 clat percentiles (usec): 00:12:44.862 | 1.00th=[ 6587], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:12:44.862 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10945], 60.00th=[12256], 00:12:44.862 | 70.00th=[14746], 80.00th=[17171], 90.00th=[20055], 95.00th=[22152], 00:12:44.862 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:12:44.862 | 99.99th=[26084] 00:12:44.862 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:12:44.862 slat (nsec): min=1877, max=10473k, avg=137641.80, stdev=618694.25 00:12:44.862 clat (usec): min=1531, max=45862, avg=18938.54, stdev=8936.93 00:12:44.862 lat (usec): min=1545, max=45874, avg=19076.18, stdev=8971.34 00:12:44.862 clat percentiles (usec): 00:12:44.862 | 1.00th=[ 3359], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[11863], 00:12:44.862 | 30.00th=[16188], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:12:44.862 | 70.00th=[17695], 80.00th=[24511], 90.00th=[34866], 95.00th=[39060], 00:12:44.862 | 99.00th=[43779], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:12:44.862 | 99.99th=[45876] 00:12:44.862 bw ( KiB/s): min=16384, max=16384, per=25.70%, avg=16384.00, stdev= 0.00, samples=2 00:12:44.862 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:12:44.862 lat (msec) : 2=0.06%, 4=0.58%, 10=14.92%, 20=67.93%, 50=16.51% 00:12:44.862 cpu : usr=2.69%, sys=5.17%, ctx=511, majf=0, minf=2 00:12:44.862 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:44.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.862 issued rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.862 job3: (groupid=0, jobs=1): err= 0: pid=2550058: Mon Dec 9 10:23:22 2024 
00:12:44.862 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:12:44.862 slat (nsec): min=1101, max=31355k, avg=135264.06, stdev=1020791.52 00:12:44.862 clat (usec): min=5297, max=68497, avg=15503.56, stdev=8665.83 00:12:44.862 lat (usec): min=5302, max=73875, avg=15638.83, stdev=8754.19 00:12:44.862 clat percentiles (usec): 00:12:44.862 | 1.00th=[ 5538], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:12:44.862 | 30.00th=[10290], 40.00th=[12256], 50.00th=[13173], 60.00th=[15926], 00:12:44.862 | 70.00th=[16909], 80.00th=[17171], 90.00th=[18744], 95.00th=[36439], 00:12:44.862 | 99.00th=[44303], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:12:44.862 | 99.99th=[68682] 00:12:44.862 write: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.1MiB/1014msec); 0 zone resets 00:12:44.862 slat (usec): min=2, max=13967, avg=127.03, stdev=793.76 00:12:44.862 clat (usec): min=3068, max=74050, avg=17917.58, stdev=12941.86 00:12:44.862 lat (usec): min=3078, max=74062, avg=18044.61, stdev=13001.83 00:12:44.862 clat percentiles (usec): 00:12:44.862 | 1.00th=[ 5080], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9110], 00:12:44.862 | 30.00th=[10552], 40.00th=[12649], 50.00th=[13829], 60.00th=[16188], 00:12:44.862 | 70.00th=[17171], 80.00th=[20841], 90.00th=[33817], 95.00th=[44303], 00:12:44.862 | 99.00th=[71828], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:12:44.862 | 99.99th=[73925] 00:12:44.862 bw ( KiB/s): min=11528, max=18416, per=23.49%, avg=14972.00, stdev=4870.55, samples=2 00:12:44.862 iops : min= 2882, max= 4604, avg=3743.00, stdev=1217.64, samples=2 00:12:44.862 lat (msec) : 4=0.27%, 10=24.62%, 20=59.22%, 50=13.42%, 100=2.48% 00:12:44.862 cpu : usr=2.47%, sys=5.13%, ctx=287, majf=0, minf=1 00:12:44.862 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:44.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.862 
issued rwts: total=3584,3870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.862 00:12:44.862 Run status group 0 (all jobs): 00:12:44.862 READ: bw=56.7MiB/s (59.5MB/s), 13.8MiB/s-14.9MiB/s (14.5MB/s-15.6MB/s), io=57.5MiB (60.3MB), run=1006-1014msec 00:12:44.862 WRITE: bw=62.2MiB/s (65.3MB/s), 14.9MiB/s-15.9MiB/s (15.6MB/s-16.7MB/s), io=63.1MiB (66.2MB), run=1006-1014msec 00:12:44.862 00:12:44.862 Disk stats (read/write): 00:12:44.862 nvme0n1: ios=2714/3072, merge=0/0, ticks=41088/52529, in_queue=93617, util=96.69% 00:12:44.862 nvme0n2: ios=3184/3584, merge=0/0, ticks=40383/58520, in_queue=98903, util=94.44% 00:12:44.862 nvme0n3: ios=3063/3079, merge=0/0, ticks=38259/60907, in_queue=99166, util=87.53% 00:12:44.862 nvme0n4: ios=2560/2943, merge=0/0, ticks=20816/25264, in_queue=46080, util=89.07% 00:12:44.862 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:44.862 [global] 00:12:44.862 thread=1 00:12:44.862 invalidate=1 00:12:44.862 rw=randwrite 00:12:44.862 time_based=1 00:12:44.862 runtime=1 00:12:44.862 ioengine=libaio 00:12:44.862 direct=1 00:12:44.862 bs=4096 00:12:44.862 iodepth=128 00:12:44.862 norandommap=0 00:12:44.862 numjobs=1 00:12:44.862 00:12:44.862 verify_dump=1 00:12:44.862 verify_backlog=512 00:12:44.862 verify_state_save=0 00:12:44.862 do_verify=1 00:12:44.862 verify=crc32c-intel 00:12:44.862 [job0] 00:12:44.862 filename=/dev/nvme0n1 00:12:44.862 [job1] 00:12:44.862 filename=/dev/nvme0n2 00:12:44.862 [job2] 00:12:44.862 filename=/dev/nvme0n3 00:12:44.862 [job3] 00:12:44.862 filename=/dev/nvme0n4 00:12:44.862 Could not set queue depth (nvme0n1) 00:12:44.862 Could not set queue depth (nvme0n2) 00:12:44.862 Could not set queue depth (nvme0n3) 00:12:44.862 Could not set queue depth (nvme0n4) 00:12:45.119 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:45.119 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:45.119 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:45.119 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:45.119 fio-3.35 00:12:45.119 Starting 4 threads 00:12:46.489 00:12:46.489 job0: (groupid=0, jobs=1): err= 0: pid=2550429: Mon Dec 9 10:23:23 2024 00:12:46.489 read: IOPS=5499, BW=21.5MiB/s (22.5MB/s)(22.5MiB/1047msec) 00:12:46.489 slat (nsec): min=1309, max=10792k, avg=77618.31, stdev=493041.51 00:12:46.489 clat (usec): min=1799, max=63776, avg=11530.07, stdev=8215.38 00:12:46.489 lat (usec): min=1805, max=63780, avg=11607.69, stdev=8228.99 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 3326], 5.00th=[ 7177], 10.00th=[ 8160], 20.00th=[ 9372], 00:12:46.489 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:12:46.489 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12256], 95.00th=[14091], 00:12:46.489 | 99.00th=[58459], 99.50th=[61604], 99.90th=[63177], 99.95th=[63701], 00:12:46.489 | 99.99th=[63701] 00:12:46.489 write: IOPS=5868, BW=22.9MiB/s (24.0MB/s)(24.0MiB/1047msec); 0 zone resets 00:12:46.489 slat (usec): min=2, max=13286, avg=75.33, stdev=449.68 00:12:46.489 clat (usec): min=1141, max=63781, avg=10817.73, stdev=6603.29 00:12:46.489 lat (usec): min=1151, max=63784, avg=10893.06, stdev=6653.97 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 2966], 5.00th=[ 3425], 10.00th=[ 6652], 20.00th=[ 8455], 00:12:46.489 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:12:46.489 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13829], 95.00th=[19006], 00:12:46.489 | 99.00th=[46400], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:12:46.489 | 99.99th=[63701] 
00:12:46.489 bw ( KiB/s): min=23160, max=25924, per=35.25%, avg=24542.00, stdev=1954.44, samples=2 00:12:46.489 iops : min= 5790, max= 6481, avg=6135.50, stdev=488.61, samples=2 00:12:46.489 lat (msec) : 2=0.19%, 4=3.86%, 10=46.36%, 20=45.40%, 50=2.76% 00:12:46.489 lat (msec) : 100=1.41% 00:12:46.489 cpu : usr=3.54%, sys=5.93%, ctx=692, majf=0, minf=1 00:12:46.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:46.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.489 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.489 job1: (groupid=0, jobs=1): err= 0: pid=2550430: Mon Dec 9 10:23:23 2024 00:12:46.489 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:12:46.489 slat (nsec): min=1068, max=22217k, avg=134254.67, stdev=1136691.24 00:12:46.489 clat (usec): min=3673, max=49453, avg=17662.36, stdev=7937.76 00:12:46.489 lat (usec): min=3683, max=49459, avg=17796.61, stdev=8030.87 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 4948], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[12387], 00:12:46.489 | 30.00th=[12649], 40.00th=[13042], 50.00th=[14746], 60.00th=[18482], 00:12:46.489 | 70.00th=[20317], 80.00th=[21627], 90.00th=[28967], 95.00th=[31065], 00:12:46.489 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:12:46.489 | 99.99th=[49546] 00:12:46.489 write: IOPS=3103, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1004msec); 0 zone resets 00:12:46.489 slat (usec): min=2, max=27904, avg=172.96, stdev=1207.58 00:12:46.489 clat (usec): min=1697, max=85257, avg=23387.49, stdev=15891.82 00:12:46.489 lat (usec): min=1705, max=85266, avg=23560.45, stdev=15969.68 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 3130], 5.00th=[ 4424], 10.00th=[ 8848], 20.00th=[11076], 00:12:46.489 | 
30.00th=[13960], 40.00th=[15795], 50.00th=[17433], 60.00th=[21103], 00:12:46.489 | 70.00th=[27395], 80.00th=[35914], 90.00th=[44827], 95.00th=[54264], 00:12:46.489 | 99.00th=[78119], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:12:46.489 | 99.99th=[85459] 00:12:46.489 bw ( KiB/s): min=12288, max=12336, per=17.68%, avg=12312.00, stdev=33.94, samples=2 00:12:46.489 iops : min= 3072, max= 3084, avg=3078.00, stdev= 8.49, samples=2 00:12:46.489 lat (msec) : 2=0.06%, 4=2.08%, 10=8.95%, 20=51.41%, 50=33.32% 00:12:46.489 lat (msec) : 100=4.17% 00:12:46.489 cpu : usr=1.89%, sys=3.59%, ctx=275, majf=0, minf=1 00:12:46.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:46.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.489 issued rwts: total=3072,3116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.489 job2: (groupid=0, jobs=1): err= 0: pid=2550431: Mon Dec 9 10:23:23 2024 00:12:46.489 read: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1007msec) 00:12:46.489 slat (nsec): min=1190, max=12822k, avg=84489.31, stdev=701963.89 00:12:46.489 clat (usec): min=1619, max=27069, avg=13320.54, stdev=3419.02 00:12:46.489 lat (usec): min=4339, max=28810, avg=13405.02, stdev=3465.59 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10683], 00:12:46.489 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12518], 60.00th=[13698], 00:12:46.489 | 70.00th=[14353], 80.00th=[15533], 90.00th=[18744], 95.00th=[19530], 00:12:46.489 | 99.00th=[23462], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:12:46.489 | 99.99th=[27132] 00:12:46.489 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:12:46.489 slat (nsec): min=1972, max=15715k, avg=71655.91, stdev=573623.97 00:12:46.489 clat (usec): 
min=2057, max=62795, avg=11993.15, stdev=6115.89 00:12:46.489 lat (usec): min=2066, max=62801, avg=12064.80, stdev=6155.97 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 3294], 5.00th=[ 5342], 10.00th=[ 6521], 20.00th=[ 8356], 00:12:46.489 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11469], 60.00th=[11863], 00:12:46.489 | 70.00th=[13042], 80.00th=[13829], 90.00th=[17695], 95.00th=[20055], 00:12:46.489 | 99.00th=[39584], 99.50th=[57410], 99.90th=[62653], 99.95th=[62653], 00:12:46.489 | 99.99th=[62653] 00:12:46.489 bw ( KiB/s): min=20368, max=20592, per=29.41%, avg=20480.00, stdev=158.39, samples=2 00:12:46.489 iops : min= 5092, max= 5148, avg=5120.00, stdev=39.60, samples=2 00:12:46.489 lat (msec) : 2=0.01%, 4=1.03%, 10=20.74%, 20=73.75%, 50=4.10% 00:12:46.489 lat (msec) : 100=0.38% 00:12:46.489 cpu : usr=3.38%, sys=5.96%, ctx=418, majf=0, minf=1 00:12:46.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:46.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.489 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.489 job3: (groupid=0, jobs=1): err= 0: pid=2550432: Mon Dec 9 10:23:23 2024 00:12:46.489 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:12:46.489 slat (nsec): min=1331, max=27045k, avg=135979.12, stdev=1058546.46 00:12:46.489 clat (usec): min=4918, max=64709, avg=16983.14, stdev=8978.33 00:12:46.489 lat (usec): min=4924, max=64718, avg=17119.12, stdev=9048.37 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 6849], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[12518], 00:12:46.489 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[14484], 00:12:46.489 | 70.00th=[16450], 80.00th=[22152], 90.00th=[26870], 95.00th=[33817], 00:12:46.489 | 99.00th=[60556], 
99.50th=[62653], 99.90th=[64750], 99.95th=[64750], 00:12:46.489 | 99.99th=[64750] 00:12:46.489 write: IOPS=3811, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1009msec); 0 zone resets 00:12:46.489 slat (usec): min=2, max=27066, avg=116.73, stdev=924.99 00:12:46.489 clat (usec): min=688, max=64680, avg=17415.72, stdev=9438.00 00:12:46.489 lat (usec): min=696, max=64683, avg=17532.45, stdev=9506.34 00:12:46.489 clat percentiles (usec): 00:12:46.489 | 1.00th=[ 4948], 5.00th=[ 7439], 10.00th=[ 9110], 20.00th=[11469], 00:12:46.489 | 30.00th=[11994], 40.00th=[13304], 50.00th=[14484], 60.00th=[16712], 00:12:46.489 | 70.00th=[19268], 80.00th=[20317], 90.00th=[31851], 95.00th=[39584], 00:12:46.489 | 99.00th=[50070], 99.50th=[52167], 99.90th=[52167], 99.95th=[64750], 00:12:46.489 | 99.99th=[64750] 00:12:46.489 bw ( KiB/s): min=14592, max=15160, per=21.36%, avg=14876.00, stdev=401.64, samples=2 00:12:46.489 iops : min= 3648, max= 3790, avg=3719.00, stdev=100.41, samples=2 00:12:46.489 lat (usec) : 750=0.05% 00:12:46.489 lat (msec) : 4=0.34%, 10=10.05%, 20=67.07%, 50=20.96%, 100=1.53% 00:12:46.489 cpu : usr=3.87%, sys=3.87%, ctx=329, majf=0, minf=2 00:12:46.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:46.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.489 issued rwts: total=3584,3846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.489 00:12:46.489 Run status group 0 (all jobs): 00:12:46.489 READ: bw=64.9MiB/s (68.1MB/s), 12.0MiB/s-21.5MiB/s (12.5MB/s-22.5MB/s), io=68.0MiB (71.3MB), run=1004-1047msec 00:12:46.489 WRITE: bw=68.0MiB/s (71.3MB/s), 12.1MiB/s-22.9MiB/s (12.7MB/s-24.0MB/s), io=71.2MiB (74.7MB), run=1004-1047msec 00:12:46.489 00:12:46.489 Disk stats (read/write): 00:12:46.489 nvme0n1: ios=5062/5120, merge=0/0, ticks=28458/26962, in_queue=55420, util=85.16% 
00:12:46.489 nvme0n2: ios=2572/2567, merge=0/0, ticks=44807/45843, in_queue=90650, util=88.54% 00:12:46.489 nvme0n3: ios=4151/4407, merge=0/0, ticks=52937/51221, in_queue=104158, util=93.08% 00:12:46.489 nvme0n4: ios=2737/3072, merge=0/0, ticks=37627/39096, in_queue=76723, util=93.35% 00:12:46.489 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:46.489 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2550658 00:12:46.489 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:46.490 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:46.490 [global] 00:12:46.490 thread=1 00:12:46.490 invalidate=1 00:12:46.490 rw=read 00:12:46.490 time_based=1 00:12:46.490 runtime=10 00:12:46.490 ioengine=libaio 00:12:46.490 direct=1 00:12:46.490 bs=4096 00:12:46.490 iodepth=1 00:12:46.490 norandommap=1 00:12:46.490 numjobs=1 00:12:46.490 00:12:46.490 [job0] 00:12:46.490 filename=/dev/nvme0n1 00:12:46.490 [job1] 00:12:46.490 filename=/dev/nvme0n2 00:12:46.490 [job2] 00:12:46.490 filename=/dev/nvme0n3 00:12:46.490 [job3] 00:12:46.490 filename=/dev/nvme0n4 00:12:46.490 Could not set queue depth (nvme0n1) 00:12:46.490 Could not set queue depth (nvme0n2) 00:12:46.490 Could not set queue depth (nvme0n3) 00:12:46.490 Could not set queue depth (nvme0n4) 00:12:46.490 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:46.490 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:46.490 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:46.490 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:46.490 fio-3.35 00:12:46.490 Starting 
4 threads 00:12:49.825 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:49.825 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:49.825 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:12:49.825 fio: pid=2550807, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:49.825 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46481408, buflen=4096 00:12:49.825 fio: pid=2550806, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:49.825 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:49.825 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:49.825 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:49.825 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:49.825 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=438272, buflen=4096 00:12:49.825 fio: pid=2550804, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:50.082 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=63668224, buflen=4096 00:12:50.082 fio: pid=2550805, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:50.082 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:50.082 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:50.082 00:12:50.082 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2550804: Mon Dec 9 10:23:27 2024 00:12:50.082 read: IOPS=33, BW=134KiB/s (138kB/s)(428KiB/3185msec) 00:12:50.082 slat (usec): min=6, max=19772, avg=199.52, stdev=1901.07 00:12:50.082 clat (usec): min=209, max=42472, avg=29359.56, stdev=18661.64 00:12:50.082 lat (usec): min=217, max=60964, avg=29560.74, stdev=18878.02 00:12:50.083 clat percentiles (usec): 00:12:50.083 | 1.00th=[ 210], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 269], 00:12:50.083 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:12:50.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:12:50.083 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:50.083 | 99.99th=[42730] 00:12:50.083 bw ( KiB/s): min= 96, max= 208, per=0.42%, avg=134.50, stdev=44.59, samples=6 00:12:50.083 iops : min= 24, max= 52, avg=33.50, stdev=11.20, samples=6 00:12:50.083 lat (usec) : 250=12.96%, 500=14.81%, 750=0.93% 00:12:50.083 lat (msec) : 50=70.37% 00:12:50.083 cpu : usr=0.09%, sys=0.00%, ctx=110, majf=0, minf=1 00:12:50.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.083 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2550805: Mon Dec 9 10:23:27 2024 00:12:50.083 read: 
IOPS=4600, BW=18.0MiB/s (18.8MB/s)(60.7MiB/3379msec) 00:12:50.083 slat (usec): min=6, max=15696, avg=10.59, stdev=174.80 00:12:50.083 clat (usec): min=157, max=1894, avg=203.52, stdev=23.96 00:12:50.083 lat (usec): min=164, max=15972, avg=214.11, stdev=177.43 00:12:50.083 clat percentiles (usec): 00:12:50.083 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:12:50.083 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:12:50.083 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 00:12:50.083 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 297], 99.95th=[ 310], 00:12:50.083 | 99.99th=[ 1631] 00:12:50.083 bw ( KiB/s): min=18320, max=18656, per=57.84%, avg=18532.00, stdev=140.10, samples=6 00:12:50.083 iops : min= 4580, max= 4664, avg=4633.00, stdev=35.03, samples=6 00:12:50.083 lat (usec) : 250=98.97%, 500=1.00%, 750=0.01% 00:12:50.083 lat (msec) : 2=0.01% 00:12:50.083 cpu : usr=3.05%, sys=6.72%, ctx=15550, majf=0, minf=2 00:12:50.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 issued rwts: total=15545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.083 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2550806: Mon Dec 9 10:23:27 2024 00:12:50.083 read: IOPS=3840, BW=15.0MiB/s (15.7MB/s)(44.3MiB/2955msec) 00:12:50.083 slat (nsec): min=7223, max=37036, avg=8131.41, stdev=911.08 00:12:50.083 clat (usec): min=177, max=525, avg=249.01, stdev=13.48 00:12:50.083 lat (usec): min=185, max=558, avg=257.14, stdev=13.48 00:12:50.083 clat percentiles (usec): 00:12:50.083 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:12:50.083 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 
60.00th=[ 251], 00:12:50.083 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 269], 00:12:50.083 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 318], 99.95th=[ 465], 00:12:50.083 | 99.99th=[ 486] 00:12:50.083 bw ( KiB/s): min=15400, max=15520, per=48.36%, avg=15494.40, stdev=52.88, samples=5 00:12:50.083 iops : min= 3850, max= 3880, avg=3873.60, stdev=13.22, samples=5 00:12:50.083 lat (usec) : 250=54.24%, 500=45.74%, 750=0.01% 00:12:50.083 cpu : usr=1.22%, sys=3.72%, ctx=11349, majf=0, minf=2 00:12:50.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 issued rwts: total=11349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.083 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2550807: Mon Dec 9 10:23:27 2024 00:12:50.083 read: IOPS=24, BW=97.7KiB/s (100kB/s)(268KiB/2742msec) 00:12:50.083 slat (nsec): min=9489, max=34502, avg=22118.66, stdev=2216.63 00:12:50.083 clat (usec): min=436, max=42073, avg=40575.65, stdev=4994.49 00:12:50.083 lat (usec): min=470, max=42094, avg=40597.78, stdev=4992.93 00:12:50.083 clat percentiles (usec): 00:12:50.083 | 1.00th=[ 437], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:50.083 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:50.083 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:50.083 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:50.083 | 99.99th=[42206] 00:12:50.083 bw ( KiB/s): min= 96, max= 104, per=0.30%, avg=97.60, stdev= 3.58, samples=5 00:12:50.083 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:12:50.083 lat (usec) : 500=1.47% 00:12:50.083 lat (msec) : 50=97.06% 00:12:50.083 
cpu : usr=0.07%, sys=0.00%, ctx=68, majf=0, minf=2 00:12:50.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.083 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.083 00:12:50.083 Run status group 0 (all jobs): 00:12:50.083 READ: bw=31.3MiB/s (32.8MB/s), 97.7KiB/s-18.0MiB/s (100kB/s-18.8MB/s), io=106MiB (111MB), run=2742-3379msec 00:12:50.083 00:12:50.083 Disk stats (read/write): 00:12:50.083 nvme0n1: ios=104/0, merge=0/0, ticks=3061/0, in_queue=3061, util=95.19% 00:12:50.083 nvme0n2: ios=15538/0, merge=0/0, ticks=2962/0, in_queue=2962, util=95.29% 00:12:50.083 nvme0n3: ios=11052/0, merge=0/0, ticks=2678/0, in_queue=2678, util=96.55% 00:12:50.083 nvme0n4: ios=64/0, merge=0/0, ticks=2597/0, in_queue=2597, util=96.49% 00:12:50.340 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:50.340 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:50.597 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:50.597 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:50.853 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:50.853 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:50.853 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:51.111 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:51.111 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:51.111 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2550658 00:12:51.111 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:51.111 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:51.369 nvmf hotplug test: fio failed as expected 00:12:51.369 10:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.627 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.627 rmmod nvme_tcp 00:12:51.627 rmmod nvme_fabrics 00:12:51.628 rmmod nvme_keyring 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:51.628 10:23:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2547722 ']' 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2547722 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2547722 ']' 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2547722 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547722 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547722' 00:12:51.628 killing process with pid 2547722 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2547722 00:12:51.628 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2547722 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.887 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.792 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:53.792 00:12:53.792 real 0m27.542s 00:12:53.792 user 1m49.506s 00:12:53.792 sys 0m8.796s 00:12:53.792 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.792 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.792 ************************************ 00:12:53.792 END TEST nvmf_fio_target 00:12:53.792 ************************************ 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.051 ************************************ 
00:12:54.051 START TEST nvmf_bdevio 00:12:54.051 ************************************ 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:54.051 * Looking for test storage... 00:12:54.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.051 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.052 10:23:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:54.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.052 --rc genhtml_branch_coverage=1 00:12:54.052 --rc genhtml_function_coverage=1 00:12:54.052 --rc genhtml_legend=1 00:12:54.052 --rc geninfo_all_blocks=1 00:12:54.052 --rc geninfo_unexecuted_blocks=1 00:12:54.052 00:12:54.052 ' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:54.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.052 --rc genhtml_branch_coverage=1 00:12:54.052 --rc genhtml_function_coverage=1 00:12:54.052 --rc genhtml_legend=1 00:12:54.052 --rc geninfo_all_blocks=1 00:12:54.052 --rc geninfo_unexecuted_blocks=1 00:12:54.052 00:12:54.052 ' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:54.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.052 --rc genhtml_branch_coverage=1 00:12:54.052 --rc genhtml_function_coverage=1 00:12:54.052 --rc genhtml_legend=1 00:12:54.052 --rc geninfo_all_blocks=1 00:12:54.052 --rc geninfo_unexecuted_blocks=1 00:12:54.052 00:12:54.052 ' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:54.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.052 --rc genhtml_branch_coverage=1 00:12:54.052 --rc genhtml_function_coverage=1 00:12:54.052 --rc genhtml_legend=1 00:12:54.052 --rc geninfo_all_blocks=1 00:12:54.052 --rc geninfo_unexecuted_blocks=1 00:12:54.052 00:12:54.052 ' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.052 10:23:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.052 10:23:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.619 10:23:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:00.619 10:23:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.619 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.619 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:00.619 
10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.619 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.619 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.619 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:00.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:00.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:13:00.620 00:13:00.620 --- 10.0.0.2 ping statistics --- 00:13:00.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.620 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:13:00.620 00:13:00.620 --- 10.0.0.1 ping statistics --- 00:13:00.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.620 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.620 10:23:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2555088 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2555088 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2555088 ']' 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.620 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 [2024-12-09 10:23:37.815469] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:13:00.620 [2024-12-09 10:23:37.815516] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.620 [2024-12-09 10:23:37.897777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.620 [2024-12-09 10:23:37.940246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.620 [2024-12-09 10:23:37.940283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.620 [2024-12-09 10:23:37.940290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.620 [2024-12-09 10:23:37.940296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.620 [2024-12-09 10:23:37.940302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:00.620 [2024-12-09 10:23:37.941791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:00.620 [2024-12-09 10:23:37.941898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:00.620 [2024-12-09 10:23:37.942004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.620 [2024-12-09 10:23:37.942004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 [2024-12-09 10:23:38.079732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.620 10:23:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 Malloc0 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:00.620 [2024-12-09 10:23:38.138358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:00.620 { 00:13:00.620 "params": { 00:13:00.620 "name": "Nvme$subsystem", 00:13:00.620 "trtype": "$TEST_TRANSPORT", 00:13:00.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.620 "adrfam": "ipv4", 00:13:00.620 "trsvcid": "$NVMF_PORT", 00:13:00.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.620 "hdgst": ${hdgst:-false}, 00:13:00.620 "ddgst": ${ddgst:-false} 00:13:00.620 }, 00:13:00.620 "method": "bdev_nvme_attach_controller" 00:13:00.620 } 00:13:00.620 EOF 00:13:00.620 )") 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:00.620 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:00.620 "params": { 00:13:00.620 "name": "Nvme1", 00:13:00.620 "trtype": "tcp", 00:13:00.620 "traddr": "10.0.0.2", 00:13:00.620 "adrfam": "ipv4", 00:13:00.620 "trsvcid": "4420", 00:13:00.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.620 "hdgst": false, 00:13:00.620 "ddgst": false 00:13:00.620 }, 00:13:00.620 "method": "bdev_nvme_attach_controller" 00:13:00.620 }' 00:13:00.620 [2024-12-09 10:23:38.188317] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:13:00.620 [2024-12-09 10:23:38.188357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555302 ] 00:13:00.620 [2024-12-09 10:23:38.262208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.620 [2024-12-09 10:23:38.305679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.620 [2024-12-09 10:23:38.305786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.620 [2024-12-09 10:23:38.305787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.878 I/O targets: 00:13:00.878 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:00.878 00:13:00.878 00:13:00.878 CUnit - A unit testing framework for C - Version 2.1-3 00:13:00.878 http://cunit.sourceforge.net/ 00:13:00.878 00:13:00.878 00:13:00.878 Suite: bdevio tests on: Nvme1n1 00:13:01.136 Test: blockdev write read block ...passed 00:13:01.136 Test: blockdev write zeroes read block ...passed 00:13:01.136 Test: blockdev write zeroes read no split ...passed 00:13:01.136 Test: blockdev write zeroes read split 
...passed 00:13:01.136 Test: blockdev write zeroes read split partial ...passed 00:13:01.136 Test: blockdev reset ...[2024-12-09 10:23:38.696637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:01.136 [2024-12-09 10:23:38.696693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x570f30 (9): Bad file descriptor 00:13:01.136 [2024-12-09 10:23:38.840605] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:13:01.136 passed 00:13:01.136 Test: blockdev write read 8 blocks ...passed 00:13:01.136 Test: blockdev write read size > 128k ...passed 00:13:01.136 Test: blockdev write read invalid size ...passed 00:13:01.399 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:01.399 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:01.399 Test: blockdev write read max offset ...passed 00:13:01.399 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:01.399 Test: blockdev writev readv 8 blocks ...passed 00:13:01.399 Test: blockdev writev readv 30 x 1block ...passed 00:13:01.399 Test: blockdev writev readv block ...passed 00:13:01.399 Test: blockdev writev readv size > 128k ...passed 00:13:01.399 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:01.399 Test: blockdev comparev and writev ...[2024-12-09 10:23:39.094541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 10:23:39.094567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:01.399 [2024-12-09 10:23:39.094581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 
10:23:39.094589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:01.399 [2024-12-09 10:23:39.094824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 10:23:39.094835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:01.399 [2024-12-09 10:23:39.094846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 10:23:39.094854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:01.399 [2024-12-09 10:23:39.095090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 10:23:39.095100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:01.399 [2024-12-09 10:23:39.095111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 10:23:39.095123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:01.399 [2024-12-09 10:23:39.095345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 10:23:39.095355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:01.399 [2024-12-09 10:23:39.095366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.399 [2024-12-09 10:23:39.095373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:01.657 passed 00:13:01.657 Test: blockdev nvme passthru rw ...passed 00:13:01.657 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:23:39.177103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.657 [2024-12-09 10:23:39.177120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:01.657 [2024-12-09 10:23:39.177226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.657 [2024-12-09 10:23:39.177236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:01.657 [2024-12-09 10:23:39.177331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.657 [2024-12-09 10:23:39.177341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:01.657 [2024-12-09 10:23:39.177437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.657 [2024-12-09 10:23:39.177446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:01.657 passed 00:13:01.657 Test: blockdev nvme admin passthru ...passed 00:13:01.657 Test: blockdev copy ...passed 00:13:01.657 00:13:01.657 Run Summary: Type Total Ran Passed Failed Inactive 00:13:01.657 suites 1 1 n/a 0 0 00:13:01.658 tests 23 23 23 0 0 00:13:01.658 asserts 152 152 152 0 n/a 00:13:01.658 00:13:01.658 Elapsed time = 1.303 seconds 
00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.658 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.915 rmmod nvme_tcp 00:13:01.915 rmmod nvme_fabrics 00:13:01.915 rmmod nvme_keyring 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2555088 ']' 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2555088 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2555088 ']' 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2555088 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2555088 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2555088' 00:13:01.915 killing process with pid 2555088 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2555088 00:13:01.915 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2555088 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.173 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.075 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.075 00:13:04.075 real 0m10.218s 00:13:04.075 user 0m11.172s 00:13:04.075 sys 0m5.054s 00:13:04.075 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.075 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:04.075 ************************************ 00:13:04.075 END TEST nvmf_bdevio 00:13:04.075 ************************************ 00:13:04.333 10:23:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:04.333 00:13:04.333 real 4m36.871s 00:13:04.333 user 10m25.056s 00:13:04.333 sys 1m38.140s 00:13:04.333 10:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.333 10:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:04.333 ************************************ 00:13:04.333 END TEST nvmf_target_core 00:13:04.333 ************************************ 00:13:04.333 10:23:41 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:04.333 10:23:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.333 10:23:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.333 10:23:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:13:04.333 ************************************ 00:13:04.333 START TEST nvmf_target_extra 00:13:04.334 ************************************ 00:13:04.334 10:23:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:04.334 * Looking for test storage... 00:13:04.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:04.334 10:23:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:04.334 10:23:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:13:04.334 10:23:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:04.334 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.593 --rc genhtml_branch_coverage=1 00:13:04.593 --rc genhtml_function_coverage=1 00:13:04.593 --rc genhtml_legend=1 00:13:04.593 --rc geninfo_all_blocks=1 
00:13:04.593 --rc geninfo_unexecuted_blocks=1 00:13:04.593 00:13:04.593 ' 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.593 --rc genhtml_branch_coverage=1 00:13:04.593 --rc genhtml_function_coverage=1 00:13:04.593 --rc genhtml_legend=1 00:13:04.593 --rc geninfo_all_blocks=1 00:13:04.593 --rc geninfo_unexecuted_blocks=1 00:13:04.593 00:13:04.593 ' 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.593 --rc genhtml_branch_coverage=1 00:13:04.593 --rc genhtml_function_coverage=1 00:13:04.593 --rc genhtml_legend=1 00:13:04.593 --rc geninfo_all_blocks=1 00:13:04.593 --rc geninfo_unexecuted_blocks=1 00:13:04.593 00:13:04.593 ' 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.593 --rc genhtml_branch_coverage=1 00:13:04.593 --rc genhtml_function_coverage=1 00:13:04.593 --rc genhtml_legend=1 00:13:04.593 --rc geninfo_all_blocks=1 00:13:04.593 --rc geninfo_unexecuted_blocks=1 00:13:04.593 00:13:04.593 ' 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.593 10:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.594 ************************************ 00:13:04.594 START TEST nvmf_example 00:13:04.594 ************************************ 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:04.594 * Looking for test storage... 00:13:04.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.594 
10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.594 --rc genhtml_branch_coverage=1 00:13:04.594 --rc genhtml_function_coverage=1 00:13:04.594 --rc genhtml_legend=1 00:13:04.594 --rc geninfo_all_blocks=1 00:13:04.594 --rc geninfo_unexecuted_blocks=1 00:13:04.594 00:13:04.594 ' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.594 --rc genhtml_branch_coverage=1 00:13:04.594 --rc genhtml_function_coverage=1 00:13:04.594 --rc genhtml_legend=1 00:13:04.594 --rc geninfo_all_blocks=1 00:13:04.594 --rc geninfo_unexecuted_blocks=1 00:13:04.594 00:13:04.594 ' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.594 --rc genhtml_branch_coverage=1 00:13:04.594 --rc genhtml_function_coverage=1 00:13:04.594 --rc genhtml_legend=1 00:13:04.594 --rc geninfo_all_blocks=1 00:13:04.594 --rc geninfo_unexecuted_blocks=1 00:13:04.594 00:13:04.594 ' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.594 --rc 
genhtml_branch_coverage=1 00:13:04.594 --rc genhtml_function_coverage=1 00:13:04.594 --rc genhtml_legend=1 00:13:04.594 --rc geninfo_all_blocks=1 00:13:04.594 --rc geninfo_unexecuted_blocks=1 00:13:04.594 00:13:04.594 ' 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:04.594 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.852 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:04.853 10:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.853 
10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.853 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.417 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.418 10:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:11.418 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:11.418 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:11.418 Found net devices under 0000:86:00.0: cvl_0_0 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.418 10:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:11.418 Found net devices under 0000:86:00.1: cvl_0_1 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.418 
10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:13:11.418 00:13:11.418 --- 10.0.0.2 ping statistics --- 00:13:11.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.418 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:13:11.418 00:13:11.418 --- 10.0.0.1 ping statistics --- 00:13:11.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.418 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.418 10:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2559125 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2559125 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2559125 ']' 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.418 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:13:11.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.419 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.419 10:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:11.676 
10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:11.676 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:23.863 Initializing NVMe Controllers
00:13:23.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:23.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:23.863 Initialization complete. Launching workers.
00:13:23.863 ========================================================
00:13:23.863 Latency(us)
00:13:23.863 Device Information : IOPS MiB/s Average min max
00:13:23.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18245.23 71.27 3507.30 559.76 15516.79
00:13:23.863 ========================================================
00:13:23.863 Total : 18245.23 71.27 3507.30 559.76 15516.79
00:13:23.863
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:23.863 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
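The perf summary above reports both IOPS and MiB/s for 4096-byte I/Os, and the two columns are mutually consistent, since MiB/s = IOPS × I/O size / 2^20. A quick awk sanity check of that conversion, using the figures taken from the table (the variable names here are illustrative, not part of the test scripts):

```shell
# Recompute throughput from the reported average: 18245.23 IOPS at 4096-byte I/Os.
iops=18245.23
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
```

This prints 71.27 MiB/s, matching the MiB/s column. Queue depth and average latency line up the same way (Little's law): 64 outstanding I/Os / 3507.30 µs average latency ≈ 18.2 K IOPS.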
00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2559125 ']' 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2559125 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2559125 ']' 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2559125 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2559125 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2559125' 00:13:23.863 killing process with pid 2559125 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2559125 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2559125 00:13:23.863 nvmf threads initialize successfully 00:13:23.863 bdev subsystem init successfully 00:13:23.863 created a nvmf target service 00:13:23.863 create targets's poll groups done 00:13:23.863 all subsystems of target started 00:13:23.863 nvmf target is running 00:13:23.863 all subsystems of target stopped 00:13:23.863 destroy targets's poll groups done 00:13:23.863 destroyed the nvmf target service 00:13:23.863 bdev subsystem 
finish successfully 00:13:23.863 nvmf threads destroy successfully 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.863 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.431 00:13:24.431 real 0m19.939s 00:13:24.431 user 0m46.583s 00:13:24.431 sys 0m5.988s 00:13:24.431 
10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:24.431 ************************************
00:13:24.431 END TEST nvmf_example
00:13:24.431 ************************************
00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:24.431 10:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:24.432 ************************************
00:13:24.432 START TEST nvmf_filesystem
00:13:24.432 ************************************
00:13:24.432 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:13:24.695 * Looking for test storage...
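The run_test helper traced above brackets each test script with END/START banners and forwards the script's exit status to the suite. A rough, hypothetical re-sketch of that wrapper (run_test_sketch is an invented name; the real helper in autotest_common.sh also records per-test timing and xtrace state):

```shell
# Hypothetical sketch of a run_test-style wrapper; not the SPDK original.
run_test_sketch() {
  local name="$1"; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"                       # run the test command itself
  local rc=$?                # capture its exit status
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc                 # propagate pass/fail to the caller
}

run_test_sketch nvmf_demo echo "test body runs here"
```

A failing test command makes the wrapper itself return nonzero, which is what lets the surrounding catchError stage mark the build failed.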
00:13:24.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:24.695 
10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:24.695 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:24.695 --rc genhtml_branch_coverage=1 00:13:24.695 --rc genhtml_function_coverage=1 00:13:24.695 --rc genhtml_legend=1 00:13:24.695 --rc geninfo_all_blocks=1 00:13:24.695 --rc geninfo_unexecuted_blocks=1 00:13:24.695 00:13:24.695 ' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:24.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.695 --rc genhtml_branch_coverage=1 00:13:24.695 --rc genhtml_function_coverage=1 00:13:24.695 --rc genhtml_legend=1 00:13:24.695 --rc geninfo_all_blocks=1 00:13:24.695 --rc geninfo_unexecuted_blocks=1 00:13:24.695 00:13:24.695 ' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:24.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.695 --rc genhtml_branch_coverage=1 00:13:24.695 --rc genhtml_function_coverage=1 00:13:24.695 --rc genhtml_legend=1 00:13:24.695 --rc geninfo_all_blocks=1 00:13:24.695 --rc geninfo_unexecuted_blocks=1 00:13:24.695 00:13:24.695 ' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:24.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.695 --rc genhtml_branch_coverage=1 00:13:24.695 --rc genhtml_function_coverage=1 00:13:24.695 --rc genhtml_legend=1 00:13:24.695 --rc geninfo_all_blocks=1 00:13:24.695 --rc geninfo_unexecuted_blocks=1 00:13:24.695 00:13:24.695 ' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:24.695 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:24.695 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:24.695 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:24.696 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:24.696 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:24.696 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:24.696 
10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:24.696 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:24.696 #define SPDK_CONFIG_H 00:13:24.696 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:24.696 #define SPDK_CONFIG_APPS 1 00:13:24.696 #define SPDK_CONFIG_ARCH native 00:13:24.696 #undef SPDK_CONFIG_ASAN 00:13:24.696 #undef SPDK_CONFIG_AVAHI 00:13:24.696 #undef SPDK_CONFIG_CET 00:13:24.696 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:24.696 #define SPDK_CONFIG_COVERAGE 1 00:13:24.696 #define SPDK_CONFIG_CROSS_PREFIX 00:13:24.696 #undef SPDK_CONFIG_CRYPTO 00:13:24.696 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:24.696 #undef SPDK_CONFIG_CUSTOMOCF 00:13:24.696 #undef SPDK_CONFIG_DAOS 00:13:24.696 #define SPDK_CONFIG_DAOS_DIR 00:13:24.696 #define SPDK_CONFIG_DEBUG 1 00:13:24.696 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:24.696 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:24.696 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:24.696 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:24.696 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:24.696 #undef SPDK_CONFIG_DPDK_UADK 00:13:24.696 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:24.696 #define SPDK_CONFIG_EXAMPLES 1 00:13:24.696 #undef SPDK_CONFIG_FC 00:13:24.696 #define SPDK_CONFIG_FC_PATH 00:13:24.696 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:24.696 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:24.696 #define SPDK_CONFIG_FSDEV 1 00:13:24.696 #undef SPDK_CONFIG_FUSE 00:13:24.696 #undef SPDK_CONFIG_FUZZER 00:13:24.696 #define SPDK_CONFIG_FUZZER_LIB 00:13:24.696 #undef SPDK_CONFIG_GOLANG 00:13:24.696 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:24.696 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:24.696 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:24.696 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:24.696 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:24.696 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:24.696 #undef SPDK_CONFIG_HAVE_LZ4 00:13:24.696 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:24.696 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:24.696 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:24.696 #define SPDK_CONFIG_IDXD 1 00:13:24.696 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:24.696 #undef SPDK_CONFIG_IPSEC_MB 00:13:24.696 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:24.697 #define SPDK_CONFIG_ISAL 1 00:13:24.697 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:24.697 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:24.697 #define SPDK_CONFIG_LIBDIR 00:13:24.697 #undef SPDK_CONFIG_LTO 00:13:24.697 #define SPDK_CONFIG_MAX_LCORES 128 00:13:24.697 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:24.697 #define SPDK_CONFIG_NVME_CUSE 1 00:13:24.697 #undef SPDK_CONFIG_OCF 00:13:24.697 #define SPDK_CONFIG_OCF_PATH 00:13:24.697 #define SPDK_CONFIG_OPENSSL_PATH 00:13:24.697 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:24.697 #define SPDK_CONFIG_PGO_DIR 00:13:24.697 #undef SPDK_CONFIG_PGO_USE 00:13:24.697 #define SPDK_CONFIG_PREFIX /usr/local 00:13:24.697 #undef SPDK_CONFIG_RAID5F 00:13:24.697 #undef SPDK_CONFIG_RBD 00:13:24.697 #define SPDK_CONFIG_RDMA 1 00:13:24.697 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:24.697 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:24.697 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:24.697 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:24.697 #define SPDK_CONFIG_SHARED 1 00:13:24.697 #undef SPDK_CONFIG_SMA 00:13:24.697 #define SPDK_CONFIG_TESTS 1 00:13:24.697 #undef SPDK_CONFIG_TSAN 00:13:24.697 #define SPDK_CONFIG_UBLK 1 00:13:24.697 #define SPDK_CONFIG_UBSAN 1 00:13:24.697 #undef SPDK_CONFIG_UNIT_TESTS 00:13:24.697 #undef SPDK_CONFIG_URING 00:13:24.697 #define SPDK_CONFIG_URING_PATH 00:13:24.697 #undef SPDK_CONFIG_URING_ZNS 00:13:24.697 #undef SPDK_CONFIG_USDT 00:13:24.697 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:24.697 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:24.697 #define SPDK_CONFIG_VFIO_USER 1 00:13:24.697 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:24.697 #define SPDK_CONFIG_VHOST 1 00:13:24.697 #define SPDK_CONFIG_VIRTIO 1 00:13:24.697 #undef SPDK_CONFIG_VTUNE 00:13:24.697 #define SPDK_CONFIG_VTUNE_DIR 00:13:24.697 #define SPDK_CONFIG_WERROR 1 00:13:24.697 #define SPDK_CONFIG_WPDK_DIR 00:13:24.697 #undef SPDK_CONFIG_XNVME 00:13:24.697 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:24.697 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:24.697 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:24.697 
10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:24.698 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:24.698 
10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:24.698 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:24.698 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:24.699 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2561657 ]] 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2561657 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:24.960 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.dQNBrp 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.dQNBrp/tests/target /tmp/spdk.dQNBrp 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189865025536 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6098948096 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971953664 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981489152 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:13:24.961 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=499712 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:24.961 * Looking for test storage... 
00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189865025536 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8313540608 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.961 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:24.961 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:24.961 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:24.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.962 --rc genhtml_branch_coverage=1 00:13:24.962 --rc genhtml_function_coverage=1 00:13:24.962 --rc genhtml_legend=1 00:13:24.962 --rc geninfo_all_blocks=1 00:13:24.962 --rc geninfo_unexecuted_blocks=1 00:13:24.962 00:13:24.962 ' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:24.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.962 --rc genhtml_branch_coverage=1 00:13:24.962 --rc genhtml_function_coverage=1 00:13:24.962 --rc genhtml_legend=1 00:13:24.962 --rc geninfo_all_blocks=1 00:13:24.962 --rc geninfo_unexecuted_blocks=1 00:13:24.962 00:13:24.962 ' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:24.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.962 --rc genhtml_branch_coverage=1 00:13:24.962 --rc genhtml_function_coverage=1 00:13:24.962 --rc genhtml_legend=1 00:13:24.962 --rc geninfo_all_blocks=1 00:13:24.962 --rc geninfo_unexecuted_blocks=1 00:13:24.962 00:13:24.962 ' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:24.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.962 --rc genhtml_branch_coverage=1 00:13:24.962 --rc genhtml_function_coverage=1 00:13:24.962 --rc genhtml_legend=1 00:13:24.962 --rc geninfo_all_blocks=1 00:13:24.962 --rc geninfo_unexecuted_blocks=1 00:13:24.962 00:13:24.962 ' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.962 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.962 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.533 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:31.533 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:31.533 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.533 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:31.533 Found net devices under 0000:86:00.0: cvl_0_0 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:31.533 Found net devices under 0000:86:00.1: cvl_0_1 00:13:31.533 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:31.533 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:31.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:31.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:13:31.534 00:13:31.534 --- 10.0.0.2 ping statistics --- 00:13:31.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.534 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:13:31.534 00:13:31.534 --- 10.0.0.1 ping statistics --- 00:13:31.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.534 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:31.534 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.534 ************************************ 00:13:31.534 START TEST nvmf_filesystem_no_in_capsule 00:13:31.534 ************************************ 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2564852 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2564852 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2564852 ']' 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.534 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.534 [2024-12-09 10:24:08.697175] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:13:31.534 [2024-12-09 10:24:08.697213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.534 [2024-12-09 10:24:08.773987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.534 [2024-12-09 10:24:08.815810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.534 [2024-12-09 10:24:08.815847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:31.534 [2024-12-09 10:24:08.815854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.534 [2024-12-09 10:24:08.815860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.534 [2024-12-09 10:24:08.815865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.534 [2024-12-09 10:24:08.817318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.534 [2024-12-09 10:24:08.817432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.534 [2024-12-09 10:24:08.817539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.534 [2024-12-09 10:24:08.817540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 [2024-12-09 10:24:09.566258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 Malloc1 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 [2024-12-09 10:24:09.715994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:32.099 10:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:32.099 { 00:13:32.099 "name": "Malloc1", 00:13:32.099 "aliases": [ 00:13:32.099 "4d299611-328b-46ef-8bdc-31c30a441910" 00:13:32.099 ], 00:13:32.099 "product_name": "Malloc disk", 00:13:32.099 "block_size": 512, 00:13:32.099 "num_blocks": 1048576, 00:13:32.099 "uuid": "4d299611-328b-46ef-8bdc-31c30a441910", 00:13:32.099 "assigned_rate_limits": { 00:13:32.099 "rw_ios_per_sec": 0, 00:13:32.099 "rw_mbytes_per_sec": 0, 00:13:32.099 "r_mbytes_per_sec": 0, 00:13:32.099 "w_mbytes_per_sec": 0 00:13:32.099 }, 00:13:32.099 "claimed": true, 00:13:32.099 "claim_type": "exclusive_write", 00:13:32.099 "zoned": false, 00:13:32.099 "supported_io_types": { 00:13:32.099 "read": true, 00:13:32.099 "write": true, 00:13:32.099 "unmap": true, 00:13:32.099 "flush": true, 00:13:32.099 "reset": true, 00:13:32.099 "nvme_admin": false, 00:13:32.099 "nvme_io": false, 00:13:32.099 "nvme_io_md": false, 00:13:32.099 "write_zeroes": true, 00:13:32.099 "zcopy": true, 00:13:32.099 "get_zone_info": false, 00:13:32.099 "zone_management": false, 00:13:32.099 "zone_append": false, 00:13:32.099 "compare": false, 00:13:32.099 "compare_and_write": 
false, 00:13:32.099 "abort": true, 00:13:32.099 "seek_hole": false, 00:13:32.099 "seek_data": false, 00:13:32.099 "copy": true, 00:13:32.099 "nvme_iov_md": false 00:13:32.099 }, 00:13:32.099 "memory_domains": [ 00:13:32.099 { 00:13:32.099 "dma_device_id": "system", 00:13:32.099 "dma_device_type": 1 00:13:32.099 }, 00:13:32.099 { 00:13:32.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.099 "dma_device_type": 2 00:13:32.099 } 00:13:32.099 ], 00:13:32.099 "driver_specific": {} 00:13:32.099 } 00:13:32.099 ]' 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:32.099 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:32.357 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:32.357 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:32.357 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:32.357 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:32.357 10:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.305 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:13:33.305 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.305 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.305 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:33.305 10:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:35.836 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:35.836 10:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:35.836 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:36.096 10:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:37.471 10:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:37.471 ************************************
00:13:37.471 START TEST filesystem_ext4
00:13:37.471 ************************************
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:13:37.471 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:13:37.471 mke2fs 1.47.0 (5-Feb-2023)
00:13:37.471 Discarding device blocks: 0/522240 done
00:13:37.471 Creating filesystem with 522240 1k blocks and 130560 inodes
00:13:37.471 Filesystem UUID: c3ca3091-ad5c-416c-989b-b13b9b6139db
00:13:37.471 Superblock backups stored on blocks:
00:13:37.471 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:13:37.471 
00:13:37.471 Allocating group tables: 0/64 done
00:13:37.471 Writing inode tables: 0/64 done
00:13:37.471 Creating journal (8192 blocks): done
00:13:38.293 Writing superblocks and filesystem accounting information: 0/64 done
00:13:38.293 
00:13:38.293 10:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:13:38.293 10:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2564852
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:13:44.848 
00:13:44.848 real 0m7.144s
00:13:44.848 user 0m0.036s
00:13:44.848 sys 0m0.064s
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:44.848 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:13:44.848 ************************************
00:13:44.848 END TEST filesystem_ext4
00:13:44.848 ************************************
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:44.848 ************************************
00:13:44.848 START TEST filesystem_btrfs
00:13:44.848 ************************************
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:13:44.848 btrfs-progs v6.8.1
00:13:44.848 See https://btrfs.readthedocs.io for more information.
00:13:44.848 
00:13:44.848 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:13:44.848 NOTE: several default settings have changed in version 5.15, please make sure
00:13:44.848 this does not affect your deployments:
00:13:44.848 - DUP for metadata (-m dup)
00:13:44.848 - enabled no-holes (-O no-holes)
00:13:44.848 - enabled free-space-tree (-R free-space-tree)
00:13:44.848 
00:13:44.848 Label: (null)
00:13:44.848 UUID: fe14ce3c-5c56-494c-9feb-b9b37ab65e96
00:13:44.848 Node size: 16384
00:13:44.848 Sector size: 4096 (CPU page size: 4096)
00:13:44.848 Filesystem size: 510.00MiB
00:13:44.848 Block group profiles:
00:13:44.848 Data: single 8.00MiB
00:13:44.848 Metadata: DUP 32.00MiB
00:13:44.848 System: DUP 8.00MiB
00:13:44.848 SSD detected: yes
00:13:44.848 Zoned device: no
00:13:44.848 Features: extref, skinny-metadata, no-holes, free-space-tree
00:13:44.848 Checksum: crc32c
00:13:44.848 Number of devices: 1
00:13:44.848 Devices:
00:13:44.848 ID SIZE PATH
00:13:44.848 1 510.00MiB /dev/nvme0n1p1
00:13:44.848 
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:13:44.848 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2564852
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:13:45.783 
00:13:45.783 real 0m1.161s
00:13:45.783 user 0m0.021s
00:13:45.783 sys 0m0.120s
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:13:45.783 ************************************
00:13:45.783 END TEST filesystem_btrfs
00:13:45.783 ************************************
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:45.783 ************************************
00:13:45.783 START TEST filesystem_xfs
00:13:45.783 ************************************
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:13:45.783 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:13:45.783 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:13:45.783 = sectsz=512 attr=2, projid32bit=1
00:13:45.783 = crc=1 finobt=1, sparse=1, rmapbt=0
00:13:45.783 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:13:45.783 data = bsize=4096 blocks=130560, imaxpct=25
00:13:45.783 = sunit=0 swidth=0 blks
00:13:45.783 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:13:45.783 log =internal log bsize=4096 blocks=16384, version=2
00:13:45.783 = sectsz=512 sunit=0 blks, lazy-count=1
00:13:45.783 realtime =none extsz=4096 blocks=0, rtextents=0
00:13:46.716 Discarding blocks...Done.
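The three xtrace sections above all go through the same `make_filesystem` helper, which picks `-F` for mkfs.ext4 but `-f` for mkfs.btrfs and mkfs.xfs before formatting the NVMe-oF-backed partition. A minimal sketch of that flag selection follows; the function names and the echo-only dry run are illustrative, not the exact SPDK helper (which also retries on failure):

```shell
#!/bin/sh
# Sketch of a make_filesystem-style helper, as traced above.
# pick_force_flag and the echo-only dry run are illustrative assumptions.
pick_force_flag() {
    # ext4 is the odd one out: mke2fs uses an uppercase -F force flag,
    # while mkfs.btrfs and mkfs.xfs use lowercase -f.
    if [ "$1" = ext4 ]; then
        echo "-F"
    else
        echo "-f"
    fi
}

make_filesystem() {
    fstype=$1
    dev_name=$2
    force=$(pick_force_flag "$fstype")
    # Print instead of running mkfs, so the sketch is safe without a device.
    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem ext4 /dev/nvme0n1p1
make_filesystem btrfs /dev/nvme0n1p1
make_filesystem xfs /dev/nvme0n1p1
```

Dropping the `echo` in `make_filesystem` turns the dry run into the real formatting step seen in the log.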
00:13:46.716 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:13:46.716 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2564852
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:13:48.610 
00:13:48.610 real 0m2.896s
00:13:48.610 user 0m0.021s
00:13:48.610 sys 0m0.076s
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:13:48.610 ************************************
00:13:48.610 END TEST filesystem_xfs
00:13:48.610 ************************************
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:13:48.610 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:48.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2564852
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2564852 ']'
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2564852
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2564852
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2564852'
00:13:48.868 killing process with pid 2564852
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2564852
00:13:48.868 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2564852
00:13:49.150 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:13:49.150 
00:13:49.150 real 0m18.176s
00:13:49.150 user 1m11.699s
00:13:49.150 sys 0m1.422s
00:13:49.150 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:49.150 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:49.150 ************************************
00:13:49.150 END TEST nvmf_filesystem_no_in_capsule
00:13:49.150 ************************************
00:13:49.150 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:13:49.150 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:49.150 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:49.150 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:13:49.425 ************************************
00:13:49.425 START TEST nvmf_filesystem_in_capsule
00:13:49.425 ************************************
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2568311
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2568311
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2568311 ']'
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:49.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:49.425 10:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:49.425 [2024-12-09 10:24:26.945086] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
00:13:49.425 [2024-12-09 10:24:26.945124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:49.425 [2024-12-09 10:24:27.025743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:49.425 [2024-12-09 10:24:27.069331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:49.425 [2024-12-09 10:24:27.069361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:49.425 [2024-12-09 10:24:27.069368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:49.425 [2024-12-09 10:24:27.069374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:49.425 [2024-12-09 10:24:27.069379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
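The `waitforlisten 2568311` step above blocks until the freshly launched `nvmf_tgt` exposes its RPC socket at `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A simplified sketch of that polling loop follows; the real helper also checks that the pid is still alive and probes the socket with `rpc.py`, which this sketch omits:

```shell
#!/bin/sh
# Simplified waitforlisten: poll until the RPC socket path exists,
# giving up after max_retries attempts. Arguments and defaults mirror
# the log (pid, /var/tmp/spdk.sock, 100 retries) but are assumptions.
waitforlisten() {
    # $1 would be the target pid; this sketch only polls the socket path.
    rpc_addr=${2:-/var/tmp/spdk.sock}
    max_retries=${3:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        # Accept either a real UNIX socket or any existing path,
        # so the sketch can be exercised with a plain file.
        if [ -S "$rpc_addr" ] || [ -e "$rpc_addr" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

The 100-retry bound keeps a crashed target from hanging the test run indefinitely; on timeout the caller fails the test instead of waiting forever.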
00:13:49.425 [2024-12-09 10:24:27.070847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:49.425 [2024-12-09 10:24:27.070866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:49.425 [2024-12-09 10:24:27.070903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:49.425 [2024-12-09 10:24:27.070905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:50.395 [2024-12-09 10:24:27.827583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:50.395 Malloc1
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.395 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:50.396 [2024-12-09 10:24:27.968843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:13:50.396 {
00:13:50.396 "name": "Malloc1",
00:13:50.396 "aliases": [
00:13:50.396 "1eaf0d2a-bde4-4117-a652-e693828cc85b"
00:13:50.396 ],
00:13:50.396 "product_name": "Malloc disk",
00:13:50.396 "block_size": 512,
00:13:50.396 "num_blocks": 1048576,
00:13:50.396 "uuid": "1eaf0d2a-bde4-4117-a652-e693828cc85b",
00:13:50.396 "assigned_rate_limits": {
00:13:50.396 "rw_ios_per_sec": 0,
00:13:50.396 "rw_mbytes_per_sec": 0,
00:13:50.396 "r_mbytes_per_sec": 0,
00:13:50.396 "w_mbytes_per_sec": 0
00:13:50.396 },
00:13:50.396 "claimed": true,
00:13:50.396 "claim_type": "exclusive_write",
00:13:50.396 "zoned": false,
00:13:50.396 "supported_io_types": {
00:13:50.396 "read": true,
00:13:50.396 "write": true,
00:13:50.396 "unmap": true,
00:13:50.396 "flush": true,
00:13:50.396 "reset": true,
00:13:50.396 "nvme_admin": false,
00:13:50.396 "nvme_io": false,
00:13:50.396 "nvme_io_md": false,
00:13:50.396 "write_zeroes": true,
00:13:50.396 "zcopy": true,
00:13:50.396 "get_zone_info": false,
00:13:50.396 "zone_management": false,
00:13:50.396 "zone_append": false,
00:13:50.396 "compare": false,
00:13:50.396 "compare_and_write": false,
00:13:50.396 "abort": true,
00:13:50.396 "seek_hole": false,
00:13:50.396 "seek_data": false,
00:13:50.396 "copy": true,
00:13:50.396 "nvme_iov_md": false
00:13:50.396 },
00:13:50.396 "memory_domains": [
00:13:50.396 {
00:13:50.396 "dma_device_id": "system",
00:13:50.396 "dma_device_type": 1
00:13:50.396 },
00:13:50.396 {
00:13:50.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:50.396 "dma_device_type": 2
00:13:50.396 }
00:13:50.396 ],
00:13:50.396 "driver_specific": {}
00:13:50.396 }
00:13:50.396 ]'
00:13:50.396 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:13:50.396 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:13:50.396 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:13:50.396 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:13:50.396 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:13:50.396 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:13:50.396 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:13:50.396 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:51.767 10:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:13:51.768 10:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:13:51.768 10:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:51.768 10:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:13:51.768 10:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:13:53.663 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:13:53.664 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:13:53.664 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:13:53.664 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:13:53.664 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:13:53.664 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:13:53.921 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:13:54.178 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:55.545 ************************************
00:13:55.545 START TEST filesystem_in_capsule_ext4
00:13:55.545 ************************************
00:13:55.545 10:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:55.545 10:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:55.545 mke2fs 1.47.0 (5-Feb-2023) 00:13:55.545 Discarding device blocks: 
0/522240 done 00:13:55.545 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:55.545 Filesystem UUID: 76b7201b-52ee-4a7a-96fc-db2dcf95c521 00:13:55.545 Superblock backups stored on blocks: 00:13:55.545 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:55.545 00:13:55.545 Allocating group tables: 0/64 done 00:13:55.545 Writing inode tables: 0/64 done 00:13:55.545 Creating journal (8192 blocks): done 00:13:57.748 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:13:57.748 00:13:57.748 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:57.748 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2568311 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:04.299 00:14:04.299 real 0m8.048s 00:14:04.299 user 0m0.029s 00:14:04.299 sys 0m0.072s 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.299 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:04.299 ************************************ 00:14:04.299 END TEST filesystem_in_capsule_ext4 00:14:04.299 ************************************ 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:04.299 ************************************ 00:14:04.299 START 
TEST filesystem_in_capsule_btrfs 00:14:04.299 ************************************ 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:04.299 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:04.300 btrfs-progs v6.8.1 00:14:04.300 See https://btrfs.readthedocs.io for more information. 00:14:04.300 00:14:04.300 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:04.300 NOTE: several default settings have changed in version 5.15, please make sure 00:14:04.300 this does not affect your deployments: 00:14:04.300 - DUP for metadata (-m dup) 00:14:04.300 - enabled no-holes (-O no-holes) 00:14:04.300 - enabled free-space-tree (-R free-space-tree) 00:14:04.300 00:14:04.300 Label: (null) 00:14:04.300 UUID: 531e7841-fea3-468e-9a00-4b0c1d33bbea 00:14:04.300 Node size: 16384 00:14:04.300 Sector size: 4096 (CPU page size: 4096) 00:14:04.300 Filesystem size: 510.00MiB 00:14:04.300 Block group profiles: 00:14:04.300 Data: single 8.00MiB 00:14:04.300 Metadata: DUP 32.00MiB 00:14:04.300 System: DUP 8.00MiB 00:14:04.300 SSD detected: yes 00:14:04.300 Zoned device: no 00:14:04.300 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:04.300 Checksum: crc32c 00:14:04.300 Number of devices: 1 00:14:04.300 Devices: 00:14:04.300 ID SIZE PATH 00:14:04.300 1 510.00MiB /dev/nvme0n1p1 00:14:04.300 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2568311 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:04.300 00:14:04.300 real 0m0.666s 00:14:04.300 user 0m0.022s 00:14:04.300 sys 0m0.118s 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:04.300 ************************************ 00:14:04.300 END TEST filesystem_in_capsule_btrfs 00:14:04.300 ************************************ 00:14:04.300 10:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:04.300 ************************************ 00:14:04.300 START TEST filesystem_in_capsule_xfs 00:14:04.300 ************************************ 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:04.300 
10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:04.300 10:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:05.234 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:05.234 = sectsz=512 attr=2, projid32bit=1 00:14:05.234 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:05.234 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:05.234 data = bsize=4096 blocks=130560, imaxpct=25 00:14:05.234 = sunit=0 swidth=0 blks 00:14:05.234 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:05.234 log =internal log bsize=4096 blocks=16384, version=2 00:14:05.234 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:05.234 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:06.167 Discarding blocks...Done. 
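The three mkfs runs traced in this test (mkfs.ext4 -F, mkfs.btrfs -f, and the mkfs.xfs -f output just above) all go through the make_filesystem helper in autotest_common.sh (@930-@941). A minimal standalone sketch of its force-flag selection follows; the retry loop, device path, and actual mkfs invocation are omitted, so this is an illustration of the branching seen in the trace, not the helper itself:

```shell
# Simplified sketch of the force-flag branch visible at
# autotest_common.sh@935-@938 in the traces above: ext4 is forced
# with -F, every other fstype (btrfs, xfs) with -f, before
# mkfs.$fstype $force /dev/nvme0n1p1 is run.
fs_force_flag() {
    if [ "$1" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}

fs_force_flag ext4   # prints -F
fs_force_flag btrfs  # prints -f
fs_force_flag xfs    # prints -f
```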
00:14:06.167 10:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:06.167 10:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:08.699 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2568311 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:08.700 00:14:08.700 real 0m4.073s 00:14:08.700 user 0m0.030s 00:14:08.700 sys 0m0.071s 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:08.700 ************************************ 00:14:08.700 END TEST filesystem_in_capsule_xfs 00:14:08.700 ************************************ 00:14:08.700 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.700 10:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2568311 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2568311 ']' 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2568311 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.700 10:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568311 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568311' 00:14:08.700 killing process with pid 2568311 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2568311 00:14:08.700 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2568311 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:08.958 00:14:08.958 real 0m19.599s 00:14:08.958 user 1m17.410s 00:14:08.958 sys 0m1.422s 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.958 ************************************ 00:14:08.958 END TEST nvmf_filesystem_in_capsule 00:14:08.958 ************************************ 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:08.958 rmmod nvme_tcp 00:14:08.958 rmmod nvme_fabrics 00:14:08.958 rmmod nvme_keyring 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:08.958 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.959 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:11.494 00:14:11.494 real 0m46.527s 00:14:11.494 user 2m31.093s 00:14:11.494 sys 0m7.624s 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:11.494 ************************************ 00:14:11.494 END TEST nvmf_filesystem 00:14:11.494 ************************************ 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.494 ************************************ 00:14:11.494 START TEST nvmf_target_discovery 00:14:11.494 ************************************ 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:11.494 * Looking for test storage... 
00:14:11.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:11.494 
10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.494 --rc genhtml_branch_coverage=1 00:14:11.494 --rc genhtml_function_coverage=1 00:14:11.494 --rc genhtml_legend=1 00:14:11.494 --rc geninfo_all_blocks=1 00:14:11.494 --rc geninfo_unexecuted_blocks=1 00:14:11.494 00:14:11.494 ' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.494 --rc genhtml_branch_coverage=1 00:14:11.494 --rc genhtml_function_coverage=1 00:14:11.494 --rc genhtml_legend=1 00:14:11.494 --rc geninfo_all_blocks=1 00:14:11.494 --rc geninfo_unexecuted_blocks=1 00:14:11.494 00:14:11.494 ' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.494 --rc genhtml_branch_coverage=1 00:14:11.494 --rc genhtml_function_coverage=1 00:14:11.494 --rc genhtml_legend=1 00:14:11.494 --rc geninfo_all_blocks=1 00:14:11.494 --rc geninfo_unexecuted_blocks=1 00:14:11.494 00:14:11.494 ' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.494 --rc genhtml_branch_coverage=1 00:14:11.494 --rc genhtml_function_coverage=1 00:14:11.494 --rc genhtml_legend=1 00:14:11.494 --rc geninfo_all_blocks=1 00:14:11.494 --rc geninfo_unexecuted_blocks=1 00:14:11.494 00:14:11.494 ' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.494 10:24:48 
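The xtrace above steps through scripts/common.sh's `cmp_versions` to decide whether the installed lcov is older than 2 (`lt 1.15 2`): each version string is split on `.`/`-` into an array and compared component by component. A minimal standalone sketch of that pattern (the function name `version_lt` is illustrative, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced above (scripts/common.sh
# cmp_versions / lt). Illustrative only; not the exact SPDK implementation.
version_lt() {
    local -a v1 v2
    local i len
    IFS=.- read -ra v1 <<< "$1"   # split "1.15" into (1 15), as in read -ra ver1
    IFS=.- read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # missing components compare as 0, so 1.15 vs 1.15.1 works
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

In the trace this check passes, so the lcov 1.x-compatible `--rc lcov_branch_coverage=1` options are exported rather than the lcov 2 spellings.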
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.494 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:11.495 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.063 10:24:54 
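The trace above logs a real (benign) script error: `nvmf/common.sh: line 33: [: : integer expression expected`, raised because an empty variable reaches an arithmetic test as `'[' '' -eq 1 ']'`. The usual guard is to default the value before the `-eq`; a small sketch (the function and variable names here are illustrative, not the SPDK ones):

```shell
#!/usr/bin/env bash
# Guarding an integer test against empty/unset values, motivated by the
# "[: : integer expression expected" line in the trace above.
hugepages_disabled() {
    local flag=$1
    # ${flag:-0} substitutes 0 when the value is empty or unset, so the
    # integer comparison never sees an empty string
    [ "${flag:-0}" -eq 1 ]
}
hugepages_disabled "" && echo disabled || echo enabled
```

With the guard, the empty-string case simply evaluates false instead of printing an error to the log.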
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.063 10:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:18.063 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:18.063 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.063 10:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:18.063 Found net devices under 0000:86:00.0: cvl_0_0 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.063 10:24:54 
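The per-PCI-device steps above (nvmf/common.sh @411 and @427) resolve each PCI address to its kernel net interface by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix, yielding names like `cvl_0_0`. A sketch of that lookup; the `sysfs_root` parameter is an addition here so the pattern can be exercised against a fake sysfs tree rather than real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the PCI-address -> netdev-name lookup traced above. The
# sysfs_root argument is illustrative (the real script uses /sys directly).
pci_net_devs() {
    local sysfs_root=$1 pci=$2
    local -a devs=("$sysfs_root/devices/$pci/net/"*)
    # keep only the interface names, mirroring "${pci_net_devs[@]##*/}"
    printf '%s\n' "${devs[@]##*/}"
}
```

In the trace this finds `cvl_0_0` under 0000:86:00.0 and `cvl_0_1` under 0000:86:00.1, which is why `is_hw=yes` is set afterwards.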
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:18.063 Found net devices under 0000:86:00.1: cvl_0_1 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.063 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:14:18.064 00:14:18.064 --- 10.0.0.2 ping statistics --- 00:14:18.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.064 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:14:18.064 00:14:18.064 --- 10.0.0.1 ping statistics --- 00:14:18.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.064 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2575287 00:14:18.064 10:24:54 
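The `nvmf_tcp_init` sequence traced above moves the target interface into a fresh network namespace, addresses both sides of the 10.0.0.0/24 link, opens TCP port 4420 in iptables, and ping-verifies both directions. The following sketch builds that command list without executing it, since actually running it requires root; device and namespace names follow the trace:

```shell
#!/usr/bin/env bash
# Emits (does not run) the netns wiring commands traced above
# (nvmf/common.sh nvmf_tcp_init). Running them for real requires root.
build_netns_cmds() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}
build_netns_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this wiring, every target-side command in the trace is prefixed with `ip netns exec cvl_0_0_ns_spdk`, including the `nvmf_tgt` launch that follows.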
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2575287 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2575287 ']' 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.064 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.064 [2024-12-09 10:24:54.991088] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:14:18.064 [2024-12-09 10:24:54.991130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.064 [2024-12-09 10:24:55.070086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.064 [2024-12-09 10:24:55.112172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:18.064 [2024-12-09 10:24:55.112208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:18.064 [2024-12-09 10:24:55.112216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:18.064 [2024-12-09 10:24:55.112222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:18.064 [2024-12-09 10:24:55.112226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:18.064 [2024-12-09 10:24:55.113623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:18.064 [2024-12-09 10:24:55.113733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:18.064 [2024-12-09 10:24:55.113850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:18.064 [2024-12-09 10:24:55.113851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 [2024-12-09 10:24:55.251459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 Null1
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 [2024-12-09 10:24:55.308957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 Null2
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.064 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 Null3
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 Null4
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:14:18.065 
00:14:18.065 Discovery Log Number of Records 6, Generation counter 6
00:14:18.065 =====Discovery Log Entry 0======
00:14:18.065 trtype: tcp
00:14:18.065 adrfam: ipv4
00:14:18.065 subtype: current discovery subsystem
00:14:18.065 treq: not required
00:14:18.065 portid: 0
00:14:18.065 trsvcid: 4420
00:14:18.065 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:14:18.065 traddr: 10.0.0.2
00:14:18.065 eflags: explicit discovery connections, duplicate discovery information
00:14:18.065 sectype: none
00:14:18.065 =====Discovery Log Entry 1======
00:14:18.065 trtype: tcp
00:14:18.065 adrfam: ipv4
00:14:18.065 subtype: nvme subsystem
00:14:18.065 treq: not required
00:14:18.065 portid: 0
00:14:18.065 trsvcid: 4420
00:14:18.065 subnqn: nqn.2016-06.io.spdk:cnode1
00:14:18.065 traddr: 10.0.0.2
00:14:18.065 eflags: none
00:14:18.065 sectype: none
00:14:18.065 =====Discovery Log Entry 2======
00:14:18.065 trtype: tcp
00:14:18.065 adrfam: ipv4
00:14:18.065 subtype: nvme subsystem
00:14:18.065 treq: not required
00:14:18.065 portid: 0
00:14:18.065 trsvcid: 4420
00:14:18.065 subnqn: nqn.2016-06.io.spdk:cnode2
00:14:18.065 traddr: 10.0.0.2
00:14:18.065 eflags: none
00:14:18.065 sectype: none
00:14:18.065 =====Discovery Log Entry 3======
00:14:18.065 trtype: tcp
00:14:18.065 adrfam: ipv4
00:14:18.065 subtype: nvme subsystem
00:14:18.065 treq: not required
00:14:18.065 portid: 0
00:14:18.065 trsvcid: 4420
00:14:18.065 subnqn: nqn.2016-06.io.spdk:cnode3
00:14:18.065 traddr: 10.0.0.2
00:14:18.065 eflags: none
00:14:18.065 sectype: none
00:14:18.065 =====Discovery Log Entry 4======
00:14:18.065 trtype: tcp
00:14:18.065 adrfam: ipv4
00:14:18.065 subtype: nvme subsystem
00:14:18.065 treq: not required
00:14:18.065 portid: 0
00:14:18.065 trsvcid: 4420
00:14:18.065 subnqn: nqn.2016-06.io.spdk:cnode4
00:14:18.065 traddr: 10.0.0.2
00:14:18.065 eflags: none
00:14:18.065 sectype: none
00:14:18.065 =====Discovery Log Entry 5======
00:14:18.065 trtype: tcp
00:14:18.065 adrfam: ipv4
00:14:18.065 subtype: discovery subsystem referral
00:14:18.065 treq: not required
00:14:18.065 portid: 0
00:14:18.065 trsvcid: 4430
00:14:18.065 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:14:18.065 traddr: 10.0.0.2
00:14:18.065 eflags: none
00:14:18.065 sectype: none
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:14:18.065 Perform nvmf subsystem discovery via RPC
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.065 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.065 [
00:14:18.065 {
00:14:18.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:18.065 "subtype": "Discovery",
00:14:18.065 "listen_addresses": [
00:14:18.065 {
00:14:18.065 "trtype": "TCP",
00:14:18.065 "adrfam": "IPv4",
00:14:18.065 "traddr": "10.0.0.2",
00:14:18.065 "trsvcid": "4420"
00:14:18.065 }
00:14:18.065 ],
00:14:18.065 "allow_any_host": true,
00:14:18.065 "hosts": []
00:14:18.065 },
00:14:18.065 {
00:14:18.065 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:18.065 "subtype": "NVMe",
00:14:18.065 "listen_addresses": [
00:14:18.065 {
00:14:18.065 "trtype": "TCP",
00:14:18.065 "adrfam": "IPv4",
00:14:18.065 "traddr": "10.0.0.2",
00:14:18.065 "trsvcid": "4420"
00:14:18.065 }
00:14:18.065 ],
00:14:18.065 "allow_any_host": true,
00:14:18.065 "hosts": [],
00:14:18.065 "serial_number": "SPDK00000000000001",
00:14:18.065 "model_number": "SPDK bdev Controller",
00:14:18.065 "max_namespaces": 32,
00:14:18.065 "min_cntlid": 1,
00:14:18.065 "max_cntlid": 65519,
00:14:18.065 "namespaces": [
00:14:18.065 {
00:14:18.065 "nsid": 1,
00:14:18.065 "bdev_name": "Null1",
00:14:18.065 "name": "Null1",
00:14:18.065 "nguid": "1F6AADB5540E482F9345EBCAEEF2480F",
00:14:18.065 "uuid": "1f6aadb5-540e-482f-9345-ebcaeef2480f"
00:14:18.065 }
00:14:18.065 ]
00:14:18.065 },
00:14:18.065 {
00:14:18.065 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:14:18.065 "subtype": "NVMe",
00:14:18.065 "listen_addresses": [
00:14:18.065 {
00:14:18.065 "trtype": "TCP",
00:14:18.065 "adrfam": "IPv4",
00:14:18.066 "traddr": "10.0.0.2",
00:14:18.066 "trsvcid": "4420"
00:14:18.066 }
00:14:18.066 ],
00:14:18.066 "allow_any_host": true,
00:14:18.066 "hosts": [],
00:14:18.066 "serial_number": "SPDK00000000000002",
00:14:18.066 "model_number": "SPDK bdev Controller",
00:14:18.066 "max_namespaces": 32,
00:14:18.066 "min_cntlid": 1,
00:14:18.066 "max_cntlid": 65519,
00:14:18.066 "namespaces": [
00:14:18.066 {
00:14:18.066 "nsid": 1,
00:14:18.066 "bdev_name": "Null2",
00:14:18.066 "name": "Null2",
00:14:18.066 "nguid": "B4EB0966B0EF469BAD4468D47E121485",
00:14:18.066 "uuid": "b4eb0966-b0ef-469b-ad44-68d47e121485"
00:14:18.066 }
00:14:18.066 ]
00:14:18.066 },
00:14:18.066 {
00:14:18.066 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:14:18.066 "subtype": "NVMe",
00:14:18.066 "listen_addresses": [
00:14:18.066 {
00:14:18.066 "trtype": "TCP",
00:14:18.066 "adrfam": "IPv4",
00:14:18.066 "traddr": "10.0.0.2",
00:14:18.066 "trsvcid": "4420"
00:14:18.066 }
00:14:18.066 ],
00:14:18.066 "allow_any_host": true,
00:14:18.066 "hosts": [],
00:14:18.066 "serial_number": "SPDK00000000000003",
00:14:18.066 "model_number": "SPDK bdev Controller",
00:14:18.066 "max_namespaces": 32,
00:14:18.066 "min_cntlid": 1,
00:14:18.066 "max_cntlid": 65519,
00:14:18.066 "namespaces": [
00:14:18.066 {
00:14:18.066 "nsid": 1,
00:14:18.066 "bdev_name": "Null3",
00:14:18.066 "name": "Null3",
00:14:18.066 "nguid": "3E55F52F21CA4E53A6325425D42AD56F",
00:14:18.066 "uuid": "3e55f52f-21ca-4e53-a632-5425d42ad56f"
00:14:18.066 }
00:14:18.066 ]
00:14:18.066 },
00:14:18.066 {
00:14:18.066 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:14:18.066 "subtype": "NVMe",
00:14:18.066 "listen_addresses": [
00:14:18.066 {
00:14:18.066 "trtype": "TCP",
00:14:18.066 "adrfam": "IPv4",
00:14:18.066 "traddr": "10.0.0.2",
00:14:18.066 "trsvcid": "4420"
00:14:18.066 }
00:14:18.066 ],
00:14:18.066 "allow_any_host": true,
00:14:18.066 "hosts": [],
00:14:18.066 "serial_number": "SPDK00000000000004",
00:14:18.066 "model_number": "SPDK bdev Controller",
00:14:18.066 "max_namespaces": 32,
00:14:18.066 "min_cntlid": 1,
00:14:18.066 "max_cntlid": 65519,
00:14:18.066 "namespaces": [
00:14:18.066 {
00:14:18.066 "nsid": 1,
00:14:18.066 "bdev_name": "Null4",
00:14:18.066 "name": "Null4",
00:14:18.066 "nguid": "1E9C9FF92F9343B98EAC729CE8E0EF4F",
00:14:18.066 "uuid": "1e9c9ff9-2f93-43b9-8eac-729ce8e0ef4f"
00:14:18.066 }
00:14:18.066 ]
00:14:18.066 }
00:14:18.066 ]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:18.066 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:18.323 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2575287 ']'
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2575287
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2575287 ']'
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2575287
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2575287
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2575287'
00:14:18.323 killing process with pid 2575287
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2575287
00:14:18.323 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2575287
00:14:18.581 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:18.582 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:20.482 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:20.482 
00:14:20.482 real 0m9.403s
00:14:20.482 user 0m5.735s
00:14:20.482 sys 0m4.865s
00:14:20.482 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:20.482 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:20.482 ************************************
00:14:20.482 END TEST nvmf_target_discovery
00:14:20.482 ************************************
00:14:20.482 10:24:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:14:20.482 10:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:20.482 10:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:20.482 10:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:20.742 ************************************
00:14:20.742 START TEST nvmf_referrals
00:14:20.742 ************************************
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:14:20.742 * Looking for test storage...
00:14:20.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:20.742 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:20.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:20.743 --rc genhtml_branch_coverage=1
00:14:20.743 --rc genhtml_function_coverage=1
00:14:20.743 --rc genhtml_legend=1
00:14:20.743 --rc geninfo_all_blocks=1
00:14:20.743 --rc geninfo_unexecuted_blocks=1
00:14:20.743 
00:14:20.743 '
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:20.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:20.743 --rc genhtml_branch_coverage=1
00:14:20.743 --rc genhtml_function_coverage=1
00:14:20.743 --rc genhtml_legend=1
00:14:20.743 --rc geninfo_all_blocks=1
00:14:20.743 --rc geninfo_unexecuted_blocks=1
00:14:20.743 
00:14:20.743 '
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:14:20.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:20.743 --rc genhtml_branch_coverage=1
00:14:20.743 --rc genhtml_function_coverage=1
00:14:20.743 --rc genhtml_legend=1
00:14:20.743 --rc geninfo_all_blocks=1
00:14:20.743 --rc geninfo_unexecuted_blocks=1
00:14:20.743 
00:14:20.743 '
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:14:20.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:20.743 --rc genhtml_branch_coverage=1
00:14:20.743 --rc genhtml_function_coverage=1
00:14:20.743 --rc genhtml_legend=1
00:14:20.743 --rc geninfo_all_blocks=1
00:14:20.743 --rc geninfo_unexecuted_blocks=1
00:14:20.743 
00:14:20.743 '
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.743 10:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.743 10:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:20.743 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:27.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:27.313 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:27.313 Found net devices under 0000:86:00.0: cvl_0_0 00:14:27.313 10:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:27.313 Found net devices under 0000:86:00.1: cvl_0_1 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:27.313 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:27.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:14:27.314 00:14:27.314 --- 10.0.0.2 ping statistics --- 00:14:27.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.314 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:14:27.314 00:14:27.314 --- 10.0.0.1 ping statistics --- 00:14:27.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.314 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2578883 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2578883 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2578883 ']' 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 [2024-12-09 10:25:04.450040] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:14:27.314 [2024-12-09 10:25:04.450085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.314 [2024-12-09 10:25:04.529004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.314 [2024-12-09 10:25:04.569289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.314 [2024-12-09 10:25:04.569327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:27.314 [2024-12-09 10:25:04.569334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.314 [2024-12-09 10:25:04.569340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.314 [2024-12-09 10:25:04.569345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.314 [2024-12-09 10:25:04.570776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.314 [2024-12-09 10:25:04.570896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.314 [2024-12-09 10:25:04.570930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.314 [2024-12-09 10:25:04.570929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 [2024-12-09 10:25:04.722199] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 [2024-12-09 10:25:04.759001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:27.314 10:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:27.314 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.315 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.572 10:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.572 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:27.573 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.830 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.087 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:28.345 10:25:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:28.601 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:28.601 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:28.601 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:28.601 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:28.601 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:28.601 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.601 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:28.857 10:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:28.857 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:29.114 rmmod nvme_tcp 00:14:29.114 rmmod nvme_fabrics 00:14:29.114 rmmod nvme_keyring 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2578883 ']' 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2578883 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2578883 ']' 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2578883 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.114 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2578883 00:14:29.371 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.371 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.371 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2578883' 00:14:29.371 killing process with pid 2578883 00:14:29.371 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2578883 00:14:29.371 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2578883 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.371 10:25:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:31.906 00:14:31.906 real 0m10.862s 00:14:31.906 user 0m12.297s 00:14:31.906 sys 0m5.208s 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:31.906 
************************************ 00:14:31.906 END TEST nvmf_referrals 00:14:31.906 ************************************ 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.906 ************************************ 00:14:31.906 START TEST nvmf_connect_disconnect 00:14:31.906 ************************************ 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:31.906 * Looking for test storage... 
00:14:31.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.906 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:31.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.907 --rc genhtml_branch_coverage=1 00:14:31.907 --rc genhtml_function_coverage=1 00:14:31.907 --rc genhtml_legend=1 00:14:31.907 --rc geninfo_all_blocks=1 00:14:31.907 --rc geninfo_unexecuted_blocks=1 00:14:31.907 00:14:31.907 ' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:31.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.907 --rc genhtml_branch_coverage=1 00:14:31.907 --rc genhtml_function_coverage=1 00:14:31.907 --rc genhtml_legend=1 00:14:31.907 --rc geninfo_all_blocks=1 00:14:31.907 --rc geninfo_unexecuted_blocks=1 00:14:31.907 00:14:31.907 ' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:31.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.907 --rc genhtml_branch_coverage=1 00:14:31.907 --rc genhtml_function_coverage=1 00:14:31.907 --rc genhtml_legend=1 00:14:31.907 --rc geninfo_all_blocks=1 00:14:31.907 --rc geninfo_unexecuted_blocks=1 00:14:31.907 00:14:31.907 ' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:31.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.907 --rc genhtml_branch_coverage=1 00:14:31.907 --rc genhtml_function_coverage=1 00:14:31.907 --rc genhtml_legend=1 00:14:31.907 --rc geninfo_all_blocks=1 00:14:31.907 --rc geninfo_unexecuted_blocks=1 00:14:31.907 00:14:31.907 ' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.907 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.908 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.482 10:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:38.482 10:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:38.482 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:38.482 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.482 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.483 10:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:38.483 Found net devices under 0000:86:00.0: cvl_0_0 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.483 10:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:38.483 Found net devices under 0000:86:00.1: cvl_0_1 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.483 10:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:14:38.483 00:14:38.483 --- 10.0.0.2 ping statistics --- 00:14:38.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.483 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:14:38.483 00:14:38.483 --- 10.0.0.1 ping statistics --- 00:14:38.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.483 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2582929 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2582929 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2582929 ']' 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.483 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 [2024-12-09 10:25:15.377155] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:14:38.484 [2024-12-09 10:25:15.377203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.484 [2024-12-09 10:25:15.456106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.484 [2024-12-09 10:25:15.498505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:38.484 [2024-12-09 10:25:15.498540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.484 [2024-12-09 10:25:15.498547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.484 [2024-12-09 10:25:15.498553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.484 [2024-12-09 10:25:15.498558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.484 [2024-12-09 10:25:15.500096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.484 [2024-12-09 10:25:15.500206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.484 [2024-12-09 10:25:15.500311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.484 [2024-12-09 10:25:15.500313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:38.484 10:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 [2024-12-09 10:25:15.646973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.484 10:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 [2024-12-09 10:25:15.709686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:38.484 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:41.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.988 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:54.988 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:54.988 10:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:54.988 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:54.988 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.988 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:54.988 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.988 10:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.988 rmmod nvme_tcp 00:14:54.988 rmmod nvme_fabrics 00:14:54.988 rmmod nvme_keyring 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2582929 ']' 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2582929 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2582929 ']' 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2582929 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:14:54.988 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2582929 
00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2582929' 00:14:54.989 killing process with pid 2582929 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2582929 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2582929 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.989 10:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.989 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:56.988 00:14:56.988 real 0m25.217s 00:14:56.988 user 1m8.450s 00:14:56.988 sys 0m5.802s 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:56.988 ************************************ 00:14:56.988 END TEST nvmf_connect_disconnect 00:14:56.988 ************************************ 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.988 ************************************ 00:14:56.988 START TEST nvmf_multitarget 00:14:56.988 ************************************ 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:56.988 * Looking for test storage... 
00:14:56.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.988 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:56.989 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.989 --rc genhtml_branch_coverage=1 00:14:56.989 --rc genhtml_function_coverage=1 00:14:56.989 --rc genhtml_legend=1 00:14:56.989 --rc geninfo_all_blocks=1 00:14:56.989 --rc geninfo_unexecuted_blocks=1 00:14:56.989 00:14:56.989 ' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:56.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.989 --rc genhtml_branch_coverage=1 00:14:56.989 --rc genhtml_function_coverage=1 00:14:56.989 --rc genhtml_legend=1 00:14:56.989 --rc geninfo_all_blocks=1 00:14:56.989 --rc geninfo_unexecuted_blocks=1 00:14:56.989 00:14:56.989 ' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:56.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.989 --rc genhtml_branch_coverage=1 00:14:56.989 --rc genhtml_function_coverage=1 00:14:56.989 --rc genhtml_legend=1 00:14:56.989 --rc geninfo_all_blocks=1 00:14:56.989 --rc geninfo_unexecuted_blocks=1 00:14:56.989 00:14:56.989 ' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:56.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.989 --rc genhtml_branch_coverage=1 00:14:56.989 --rc genhtml_function_coverage=1 00:14:56.989 --rc genhtml_legend=1 00:14:56.989 --rc geninfo_all_blocks=1 00:14:56.989 --rc geninfo_unexecuted_blocks=1 00:14:56.989 00:14:56.989 ' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.989 10:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.989 10:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:56.989 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:03.559 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:03.559 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:03.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:03.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.559 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:03.559 Found net devices under 0000:86:00.0: cvl_0_0 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.559 
10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:03.559 Found net devices under 0000:86:00.1: cvl_0_1 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.559 10:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:03.559 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:03.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:15:03.560 00:15:03.560 --- 10.0.0.2 ping statistics --- 00:15:03.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.560 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:03.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:15:03.560 00:15:03.560 --- 10.0.0.1 ping statistics --- 00:15:03.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.560 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2589340 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2589340 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2589340 ']' 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.560 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:03.560 [2024-12-09 10:25:40.663087] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:15:03.560 [2024-12-09 10:25:40.663132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.560 [2024-12-09 10:25:40.748139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.560 [2024-12-09 10:25:40.789828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.560 [2024-12-09 10:25:40.789868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:03.560 [2024-12-09 10:25:40.789876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.560 [2024-12-09 10:25:40.789881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.560 [2024-12-09 10:25:40.789886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.560 [2024-12-09 10:25:40.794827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.560 [2024-12-09 10:25:40.794852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.560 [2024-12-09 10:25:40.794971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.560 [2024-12-09 10:25:40.794970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.817 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.817 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:03.817 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:03.817 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:03.817 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:04.073 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.073 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:04.073 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:04.073 10:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:04.073 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:04.073 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:04.073 "nvmf_tgt_1" 00:15:04.073 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:04.330 "nvmf_tgt_2" 00:15:04.330 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:04.330 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:04.330 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:04.330 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:04.587 true 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:04.587 true 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:04.587 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:04.844 rmmod nvme_tcp 00:15:04.844 rmmod nvme_fabrics 00:15:04.844 rmmod nvme_keyring 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2589340 ']' 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2589340 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2589340 ']' 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2589340 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2589340 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2589340' 00:15:04.844 killing process with pid 2589340 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2589340 00:15:04.844 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2589340 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.102 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.004 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:07.004 00:15:07.004 real 0m10.209s 00:15:07.004 user 0m9.895s 00:15:07.004 sys 0m4.849s 00:15:07.004 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.004 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:07.004 ************************************ 00:15:07.004 END TEST nvmf_multitarget 00:15:07.004 ************************************ 00:15:07.004 10:25:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:07.004 10:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.004 10:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.004 10:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.263 ************************************ 00:15:07.263 START TEST nvmf_rpc 00:15:07.263 ************************************ 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:07.263 * Looking for test storage... 
00:15:07.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.263 10:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.263 --rc genhtml_branch_coverage=1 00:15:07.263 --rc genhtml_function_coverage=1 00:15:07.263 --rc genhtml_legend=1 00:15:07.263 --rc geninfo_all_blocks=1 00:15:07.263 --rc geninfo_unexecuted_blocks=1 
00:15:07.263 00:15:07.263 ' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.263 --rc genhtml_branch_coverage=1 00:15:07.263 --rc genhtml_function_coverage=1 00:15:07.263 --rc genhtml_legend=1 00:15:07.263 --rc geninfo_all_blocks=1 00:15:07.263 --rc geninfo_unexecuted_blocks=1 00:15:07.263 00:15:07.263 ' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.263 --rc genhtml_branch_coverage=1 00:15:07.263 --rc genhtml_function_coverage=1 00:15:07.263 --rc genhtml_legend=1 00:15:07.263 --rc geninfo_all_blocks=1 00:15:07.263 --rc geninfo_unexecuted_blocks=1 00:15:07.263 00:15:07.263 ' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.263 --rc genhtml_branch_coverage=1 00:15:07.263 --rc genhtml_function_coverage=1 00:15:07.263 --rc genhtml_legend=1 00:15:07.263 --rc geninfo_all_blocks=1 00:15:07.263 --rc geninfo_unexecuted_blocks=1 00:15:07.263 00:15:07.263 ' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.263 10:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.263 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:07.264 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:07.264 10:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.831 
10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:15:13.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:13.831 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:13.831 Found net devices under 0000:86:00.0: cvl_0_0 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:13.831 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:13.832 Found net devices under 0000:86:00.1: cvl_0_1 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.832 10:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:13.832 
10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:13.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:15:13.832 00:15:13.832 --- 10.0.0.2 ping statistics --- 00:15:13.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.832 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:15:13.832 00:15:13.832 --- 10.0.0.1 ping statistics --- 00:15:13.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.832 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2593130 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:13.832 
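The nvmf_tcp_init trace above moves one port of the NIC pair into a network namespace, addresses both ends, opens TCP/4420, and ping-tests the link in both directions. A dry-run sketch of that flow (hedged: the real helper in nvmf/common.sh runs these as root on the cvl_0_* ports; here RUN=echo only prints the commands it would issue):

```shell
# Dry-run sketch of the nvmf_tcp_init steps traced above (assumption: RUN=echo
# instead of executing, since the real commands need root and the cvl_0_* NICs).
nvmf_tcp_init_sketch() {
    RUN="echo"
    NS=cvl_0_0_ns_spdk
    $RUN ip netns add "$NS"                                       # target-side namespace
    $RUN ip link set cvl_0_0 netns "$NS"                          # move target port in
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    $RUN ip netns exec "$NS" ip link set lo up
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $RUN ping -c 1 10.0.0.2                                       # initiator -> target
    $RUN ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
}
nvmf_tcp_init_sketch
```

With the link verified, the test prefixes NVMF_APP with `ip netns exec cvl_0_0_ns_spdk` so nvmf_tgt runs inside the namespace, as the trace shows next.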
10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2593130 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2593130 ']' 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.832 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.832 [2024-12-09 10:25:50.978094] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:15:13.832 [2024-12-09 10:25:50.978135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.832 [2024-12-09 10:25:51.057077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.832 [2024-12-09 10:25:51.097103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.832 [2024-12-09 10:25:51.097140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.832 [2024-12-09 10:25:51.097146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.832 [2024-12-09 10:25:51.097152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:13.832 [2024-12-09 10:25:51.097160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.832 [2024-12-09 10:25:51.098782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.832 [2024-12-09 10:25:51.098893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.832 [2024-12-09 10:25:51.098929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.832 [2024-12-09 10:25:51.098931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.089 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.089 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:14.089 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.089 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.089 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.345 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.345 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:14.345 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.345 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.345 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.345 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:14.345 "tick_rate": 2100000000, 00:15:14.345 "poll_groups": [ 00:15:14.345 { 00:15:14.345 "name": "nvmf_tgt_poll_group_000", 00:15:14.345 "admin_qpairs": 0, 00:15:14.345 "io_qpairs": 0, 00:15:14.345 
"current_admin_qpairs": 0, 00:15:14.345 "current_io_qpairs": 0, 00:15:14.345 "pending_bdev_io": 0, 00:15:14.345 "completed_nvme_io": 0, 00:15:14.345 "transports": [] 00:15:14.345 }, 00:15:14.345 { 00:15:14.345 "name": "nvmf_tgt_poll_group_001", 00:15:14.345 "admin_qpairs": 0, 00:15:14.345 "io_qpairs": 0, 00:15:14.345 "current_admin_qpairs": 0, 00:15:14.345 "current_io_qpairs": 0, 00:15:14.345 "pending_bdev_io": 0, 00:15:14.345 "completed_nvme_io": 0, 00:15:14.345 "transports": [] 00:15:14.345 }, 00:15:14.345 { 00:15:14.345 "name": "nvmf_tgt_poll_group_002", 00:15:14.345 "admin_qpairs": 0, 00:15:14.345 "io_qpairs": 0, 00:15:14.345 "current_admin_qpairs": 0, 00:15:14.345 "current_io_qpairs": 0, 00:15:14.345 "pending_bdev_io": 0, 00:15:14.345 "completed_nvme_io": 0, 00:15:14.345 "transports": [] 00:15:14.345 }, 00:15:14.345 { 00:15:14.345 "name": "nvmf_tgt_poll_group_003", 00:15:14.345 "admin_qpairs": 0, 00:15:14.345 "io_qpairs": 0, 00:15:14.345 "current_admin_qpairs": 0, 00:15:14.345 "current_io_qpairs": 0, 00:15:14.345 "pending_bdev_io": 0, 00:15:14.345 "completed_nvme_io": 0, 00:15:14.345 "transports": [] 00:15:14.345 } 00:15:14.345 ] 00:15:14.346 }' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.346 [2024-12-09 10:25:51.955724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:14.346 "tick_rate": 2100000000, 00:15:14.346 "poll_groups": [ 00:15:14.346 { 00:15:14.346 "name": "nvmf_tgt_poll_group_000", 00:15:14.346 "admin_qpairs": 0, 00:15:14.346 "io_qpairs": 0, 00:15:14.346 "current_admin_qpairs": 0, 00:15:14.346 "current_io_qpairs": 0, 00:15:14.346 "pending_bdev_io": 0, 00:15:14.346 "completed_nvme_io": 0, 00:15:14.346 "transports": [ 00:15:14.346 { 00:15:14.346 "trtype": "TCP" 00:15:14.346 } 00:15:14.346 ] 00:15:14.346 }, 00:15:14.346 { 00:15:14.346 "name": "nvmf_tgt_poll_group_001", 00:15:14.346 "admin_qpairs": 0, 00:15:14.346 "io_qpairs": 0, 00:15:14.346 "current_admin_qpairs": 0, 00:15:14.346 "current_io_qpairs": 0, 00:15:14.346 "pending_bdev_io": 0, 00:15:14.346 "completed_nvme_io": 0, 00:15:14.346 "transports": [ 00:15:14.346 { 00:15:14.346 "trtype": "TCP" 00:15:14.346 } 00:15:14.346 ] 00:15:14.346 }, 00:15:14.346 { 00:15:14.346 "name": "nvmf_tgt_poll_group_002", 00:15:14.346 "admin_qpairs": 0, 00:15:14.346 "io_qpairs": 0, 00:15:14.346 
"current_admin_qpairs": 0, 00:15:14.346 "current_io_qpairs": 0, 00:15:14.346 "pending_bdev_io": 0, 00:15:14.346 "completed_nvme_io": 0, 00:15:14.346 "transports": [ 00:15:14.346 { 00:15:14.346 "trtype": "TCP" 00:15:14.346 } 00:15:14.346 ] 00:15:14.346 }, 00:15:14.346 { 00:15:14.346 "name": "nvmf_tgt_poll_group_003", 00:15:14.346 "admin_qpairs": 0, 00:15:14.346 "io_qpairs": 0, 00:15:14.346 "current_admin_qpairs": 0, 00:15:14.346 "current_io_qpairs": 0, 00:15:14.346 "pending_bdev_io": 0, 00:15:14.346 "completed_nvme_io": 0, 00:15:14.346 "transports": [ 00:15:14.346 { 00:15:14.346 "trtype": "TCP" 00:15:14.346 } 00:15:14.346 ] 00:15:14.346 } 00:15:14.346 ] 00:15:14.346 }' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:14.346 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.346 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.603 Malloc1 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.603 [2024-12-09 10:25:52.129457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.603 
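The NOT-wrapped connect above is expected to fail: the host NQN has not been added to the subsystem, so the target rejects the login. A hedged model of the admission decision being exercised (illustrative only; the real check lives in SPDK's ctrlr.c, and the function and variable names here are assumptions):

```shell
# Toy model of the host-ACL check the target applies above: a connecting host
# is admitted when allow_any_host is set on the subsystem, or when its host
# NQN was registered via nvmf_subsystem_add_host. Otherwise the login is
# rejected and nvme connect reports an I/O error, as seen in the trace.
ALLOW_ANY_HOST=0
ALLOWED_HOSTS=""

host_allowed() {  # $1 = host NQN; exit status 0 means the connect is admitted
    [ "$ALLOW_ANY_HOST" -eq 1 ] && return 0
    for h in $ALLOWED_HOSTS; do
        [ "$h" = "$1" ] && return 0
    done
    return 1      # -> "Subsystem ... does not allow host ..." in the log
}
```

This matches the sequence in the trace: the first connect fails, `nvmf_subsystem_add_host` (rpc.sh@61) adds the NQN, and the retried connect succeeds; later the host is removed and `-e` re-enables allow_any_host to show the other admission path.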
10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:14.603 [2024-12-09 10:25:52.158170] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:15:14.603 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:14.603 could not add new controller: failed to write to nvme-fabrics device 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.603 10:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.603 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.604 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.972 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.972 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:15.972 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.972 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:15.972 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.859 10:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.859 [2024-12-09 10:25:55.520083] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:15:17.859 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:17.859 could not add new controller: failed to write to nvme-fabrics device 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.859 10:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.859 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.224 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.224 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:19.224 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.224 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:19.224 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.116 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.373 [2024-12-09 10:25:58.846967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.373 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:22.743 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:22.743 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:22.743 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.743 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:22.743 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 10:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 [2024-12-09 10:26:02.220422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.635 10:26:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.002 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:26.002 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:26.002 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.002 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:26.003 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.897 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.898 [2024-12-09 10:26:05.482985] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.898 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.309 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.309 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:29.309 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:15:29.309 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:29.309 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:31.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.201 [2024-12-09 10:26:08.899572] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.201 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.458 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.388 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:32.388 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:32.388 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.388 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:32.388 10:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.909 [2024-12-09 10:26:12.208408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.909 10:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.909 10:26:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:35.838 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.838 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:35.838 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.838 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:35.838 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:37.726 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:37.726 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:15:37.726 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.727 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:37.727 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.727 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:37.727 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.981 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 [2024-12-09 10:26:15.610945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 [2024-12-09 10:26:15.659064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.982 
10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.982 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 [2024-12-09 10:26:15.707202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:38.239 
10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 [2024-12-09 10:26:15.755381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 [2024-12-09 
10:26:15.803550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 
10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:38.239 "tick_rate": 2100000000, 00:15:38.239 "poll_groups": [ 00:15:38.239 { 00:15:38.239 "name": "nvmf_tgt_poll_group_000", 00:15:38.239 "admin_qpairs": 2, 00:15:38.239 "io_qpairs": 168, 00:15:38.239 "current_admin_qpairs": 0, 00:15:38.239 "current_io_qpairs": 0, 00:15:38.239 "pending_bdev_io": 0, 00:15:38.239 "completed_nvme_io": 268, 00:15:38.239 "transports": [ 00:15:38.239 { 00:15:38.239 "trtype": "TCP" 00:15:38.239 } 00:15:38.239 ] 00:15:38.239 }, 00:15:38.239 { 00:15:38.239 "name": "nvmf_tgt_poll_group_001", 00:15:38.239 "admin_qpairs": 2, 00:15:38.239 "io_qpairs": 168, 00:15:38.239 "current_admin_qpairs": 0, 00:15:38.239 "current_io_qpairs": 0, 00:15:38.239 "pending_bdev_io": 0, 00:15:38.239 "completed_nvme_io": 268, 00:15:38.239 "transports": [ 00:15:38.239 { 00:15:38.239 "trtype": "TCP" 00:15:38.239 } 00:15:38.239 ] 00:15:38.239 }, 00:15:38.239 { 00:15:38.239 "name": "nvmf_tgt_poll_group_002", 00:15:38.239 "admin_qpairs": 1, 00:15:38.239 "io_qpairs": 168, 00:15:38.239 "current_admin_qpairs": 0, 00:15:38.239 "current_io_qpairs": 0, 00:15:38.239 "pending_bdev_io": 0, 00:15:38.239 "completed_nvme_io": 316, 00:15:38.239 "transports": [ 00:15:38.239 { 00:15:38.239 "trtype": "TCP" 00:15:38.239 } 00:15:38.239 ] 00:15:38.239 }, 00:15:38.239 { 00:15:38.239 "name": "nvmf_tgt_poll_group_003", 00:15:38.239 "admin_qpairs": 2, 00:15:38.239 "io_qpairs": 168, 
00:15:38.240 "current_admin_qpairs": 0, 00:15:38.240 "current_io_qpairs": 0, 00:15:38.240 "pending_bdev_io": 0, 00:15:38.240 "completed_nvme_io": 170, 00:15:38.240 "transports": [ 00:15:38.240 { 00:15:38.240 "trtype": "TCP" 00:15:38.240 } 00:15:38.240 ] 00:15:38.240 } 00:15:38.240 ] 00:15:38.240 }' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
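The `jsum` helper exercised above (rpc.sh@19-20) sums a per-poll-group counter by piping `jq '.poll_groups[].<field>'` into `awk '{s+=$1}END{print s}'`, and the test then asserts the sum is positive. As a reference, here is a minimal Python sketch of the same aggregation over the values quoted in the `nvmf_get_stats` output above (the numbers are copied from this log; the `jsum` function name mirrors the shell helper, everything else is illustrative):

```python
# Stats as quoted in the nvmf_get_stats output above, reduced to the
# fields jsum actually reads (values copied verbatim from the log).
stats = {
    "tick_rate": 2100000000,
    "poll_groups": [
        {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 2, "io_qpairs": 168},
        {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 2, "io_qpairs": 168},
        {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 1, "io_qpairs": 168},
        {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 2, "io_qpairs": 168},
    ],
}

def jsum(stats, field):
    # Equivalent of: jq '.poll_groups[].FIELD' | awk '{s+=$1}END{print s}'
    return sum(group[field] for group in stats["poll_groups"])

print(jsum(stats, "admin_qpairs"))  # 7, matching the log's (( 7 > 0 )) check
print(jsum(stats, "io_qpairs"))     # 672, matching the (( 672 > 0 )) check
```

The positive-sum assertions confirm that qpairs were actually distributed across the target's poll groups during the preceding connect/disconnect loop.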
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.240 10:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.240 rmmod nvme_tcp 00:15:38.497 rmmod nvme_fabrics 00:15:38.497 rmmod nvme_keyring 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2593130 ']' 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2593130 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2593130 ']' 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2593130 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593130 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593130' 00:15:38.497 killing process with pid 2593130 00:15:38.497 10:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2593130 00:15:38.497 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2593130 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.755 10:26:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.661 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:40.661 00:15:40.661 real 0m33.588s 00:15:40.661 user 1m42.111s 00:15:40.661 sys 0m6.497s 00:15:40.661 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.661 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.661 ************************************ 00:15:40.661 END TEST 
nvmf_rpc 00:15:40.661 ************************************ 00:15:40.661 10:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:40.661 10:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.661 10:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.662 10:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.921 ************************************ 00:15:40.921 START TEST nvmf_invalid 00:15:40.921 ************************************ 00:15:40.921 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:40.921 * Looking for test storage... 00:15:40.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.921 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:40.921 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:15:40.921 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:40.921 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:40.921 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:40.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.922 --rc genhtml_branch_coverage=1 00:15:40.922 --rc genhtml_function_coverage=1 00:15:40.922 --rc genhtml_legend=1 00:15:40.922 --rc geninfo_all_blocks=1 00:15:40.922 --rc geninfo_unexecuted_blocks=1 00:15:40.922 00:15:40.922 ' 
00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:40.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.922 --rc genhtml_branch_coverage=1 00:15:40.922 --rc genhtml_function_coverage=1 00:15:40.922 --rc genhtml_legend=1 00:15:40.922 --rc geninfo_all_blocks=1 00:15:40.922 --rc geninfo_unexecuted_blocks=1 00:15:40.922 00:15:40.922 ' 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:40.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.922 --rc genhtml_branch_coverage=1 00:15:40.922 --rc genhtml_function_coverage=1 00:15:40.922 --rc genhtml_legend=1 00:15:40.922 --rc geninfo_all_blocks=1 00:15:40.922 --rc geninfo_unexecuted_blocks=1 00:15:40.922 00:15:40.922 ' 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:40.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.922 --rc genhtml_branch_coverage=1 00:15:40.922 --rc genhtml_function_coverage=1 00:15:40.922 --rc genhtml_legend=1 00:15:40.922 --rc geninfo_all_blocks=1 00:15:40.922 --rc geninfo_unexecuted_blocks=1 00:15:40.922 00:15:40.922 ' 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.922 10:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.922 
10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.922 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.923 10:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:40.923 10:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:40.923 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:47.496 10:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:47.496 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.497 10:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:47.497 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:47.497 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:47.497 Found net devices under 0000:86:00.0: cvl_0_0 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:47.497 Found net devices under 0000:86:00.1: cvl_0_1 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.497 10:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.497 10:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:47.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:15:47.497 00:15:47.497 --- 10.0.0.2 ping statistics --- 00:15:47.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.497 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:15:47.497 00:15:47.497 --- 10.0.0.1 ping statistics --- 00:15:47.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.497 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:47.497 10:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2600961 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2600961 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2600961 ']' 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.497 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
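The nvmf_tcp_init sequence earlier in the log (common.sh@250-291) moves one port of the E810 NIC into a private network namespace so target and initiator can exchange real TCP traffic on a single host. A dry-run sketch of that sequence follows; the `run` wrapper is our addition so the commands can be listed without root, while the interface names, namespace name, and addresses are taken from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# Swap run() for direct execution (as root) to actually apply it.
set -eu
TARGET_IF=cvl_0_0          # port moved into the namespace (target side)
INITIATOR_IF=cvl_0_1       # port left in the default namespace
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic (port 4420) in on the initiator-side interface:
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` calls in the log then confirm 10.0.0.2 is reachable from the default namespace and 10.0.0.1 from inside the namespace before any NVMe-oF traffic starts.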
00:15:47.498 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.498 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.498 [2024-12-09 10:26:24.607038] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:15:47.498 [2024-12-09 10:26:24.607080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.498 [2024-12-09 10:26:24.685837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.498 [2024-12-09 10:26:24.728578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.498 [2024-12-09 10:26:24.728614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.498 [2024-12-09 10:26:24.728622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.498 [2024-12-09 10:26:24.728628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.498 [2024-12-09 10:26:24.728634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
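The waitforlisten step above (autotest_common.sh@835-844) blocks until the freshly started nvmf_tgt process is alive and its RPC socket appears. A simplified stand-in is sketched below; the real helper additionally issues an RPC over /var/tmp/spdk.sock to confirm the app answers, which is omitted here:

```shell
#!/usr/bin/env bash
# Simplified sketch of waitforlisten: poll until the pid is still alive and
# its UNIX-domain RPC socket shows up, or give up after max_retries.
waitforsocket() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died
    [ -S "$rpc_addr" ] && return 0           # socket exists
    sleep 0.1
  done
  return 1                                   # timed out
}
```

Once the wait succeeds, the script installs the `trap ... SIGINT SIGTERM EXIT` handlers seen on the next log lines so nvmftestfini can tear the namespace down even on failure.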
00:15:47.498 [2024-12-09 10:26:24.730068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.498 [2024-12-09 10:26:24.730095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.498 [2024-12-09 10:26:24.730202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.498 [2024-12-09 10:26:24.730203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.755 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.755 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:47.755 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:47.755 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.755 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:48.010 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.010 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:48.010 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10104 00:15:48.010 [2024-12-09 10:26:25.671305] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:48.010 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:48.010 { 00:15:48.010 "nqn": "nqn.2016-06.io.spdk:cnode10104", 00:15:48.010 "tgt_name": "foobar", 00:15:48.010 "method": "nvmf_create_subsystem", 00:15:48.010 "req_id": 1 00:15:48.010 } 00:15:48.010 Got JSON-RPC error 
response 00:15:48.010 response: 00:15:48.010 { 00:15:48.010 "code": -32603, 00:15:48.010 "message": "Unable to find target foobar" 00:15:48.010 }' 00:15:48.010 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:48.010 { 00:15:48.010 "nqn": "nqn.2016-06.io.spdk:cnode10104", 00:15:48.010 "tgt_name": "foobar", 00:15:48.010 "method": "nvmf_create_subsystem", 00:15:48.010 "req_id": 1 00:15:48.010 } 00:15:48.010 Got JSON-RPC error response 00:15:48.010 response: 00:15:48.010 { 00:15:48.010 "code": -32603, 00:15:48.010 "message": "Unable to find target foobar" 00:15:48.010 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:48.010 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:48.010 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5837 00:15:48.265 [2024-12-09 10:26:25.880039] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5837: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:48.265 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:48.265 { 00:15:48.265 "nqn": "nqn.2016-06.io.spdk:cnode5837", 00:15:48.265 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:48.265 "method": "nvmf_create_subsystem", 00:15:48.265 "req_id": 1 00:15:48.265 } 00:15:48.265 Got JSON-RPC error response 00:15:48.265 response: 00:15:48.265 { 00:15:48.265 "code": -32602, 00:15:48.265 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.265 }' 00:15:48.265 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:48.265 { 00:15:48.265 "nqn": "nqn.2016-06.io.spdk:cnode5837", 00:15:48.265 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:48.265 "method": "nvmf_create_subsystem", 00:15:48.265 
"req_id": 1 00:15:48.265 } 00:15:48.265 Got JSON-RPC error response 00:15:48.265 response: 00:15:48.265 { 00:15:48.265 "code": -32602, 00:15:48.265 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.265 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:48.265 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:48.265 10:26:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21798 00:15:48.520 [2024-12-09 10:26:26.076638] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21798: invalid model number 'SPDK_Controller' 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:48.520 { 00:15:48.520 "nqn": "nqn.2016-06.io.spdk:cnode21798", 00:15:48.520 "model_number": "SPDK_Controller\u001f", 00:15:48.520 "method": "nvmf_create_subsystem", 00:15:48.520 "req_id": 1 00:15:48.520 } 00:15:48.520 Got JSON-RPC error response 00:15:48.520 response: 00:15:48.520 { 00:15:48.520 "code": -32602, 00:15:48.520 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.520 }' 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:48.520 { 00:15:48.520 "nqn": "nqn.2016-06.io.spdk:cnode21798", 00:15:48.520 "model_number": "SPDK_Controller\u001f", 00:15:48.520 "method": "nvmf_create_subsystem", 00:15:48.520 "req_id": 1 00:15:48.520 } 00:15:48.520 Got JSON-RPC error response 00:15:48.520 response: 00:15:48.520 { 00:15:48.520 "code": -32602, 00:15:48.520 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.520 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.520 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:48.520 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:48.521 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:48.521 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:48.521 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:48.521 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.777 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'g3:r_2MEFv59L,W2&s#r0' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'g3:r_2MEFv59L,W2&s#r0' nqn.2016-06.io.spdk:cnode8801 00:15:48.777 [2024-12-09 10:26:26.413742] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8801: invalid serial number 'g3:r_2MEFv59L,W2&s#r0' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:48.777 { 00:15:48.777 "nqn": "nqn.2016-06.io.spdk:cnode8801", 00:15:48.777 "serial_number": "g3:r_2MEFv59L,W2&s#r0", 00:15:48.777 "method": "nvmf_create_subsystem", 00:15:48.777 "req_id": 1 00:15:48.777 } 00:15:48.777 Got JSON-RPC error response 00:15:48.777 response: 00:15:48.777 { 00:15:48.777 "code": -32602, 00:15:48.777 "message": "Invalid SN g3:r_2MEFv59L,W2&s#r0" 00:15:48.777 }' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:48.777 { 00:15:48.777 "nqn": "nqn.2016-06.io.spdk:cnode8801", 00:15:48.777 "serial_number": "g3:r_2MEFv59L,W2&s#r0", 00:15:48.777 "method": "nvmf_create_subsystem", 00:15:48.777 "req_id": 1 00:15:48.777 } 00:15:48.777 Got JSON-RPC error response 00:15:48.777 response: 00:15:48.777 { 00:15:48.777 "code": -32602, 00:15:48.777 "message": "Invalid SN g3:r_2MEFv59L,W2&s#r0" 00:15:48.777 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:48.777 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.777 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:48.777 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:48.777 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.778 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.778 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:48.778 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:48.778 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:48.778 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.778 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:49.034 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:49.034 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:49.035 
10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:49.035 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:49.035 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:49.035 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:49.036 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:49.036 10:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@29 -- # string='\-y76w}>'\''(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'\''DzJs' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-y76w}>'\''(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'\''DzJs' 00:15:49.036 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d '\-y76w}>'\''(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'\''DzJs' nqn.2016-06.io.spdk:cnode9014 00:15:49.292 [2024-12-09 10:26:26.895344] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9014: invalid model number '\-y76w}>'(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'DzJs' 00:15:49.292 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:49.292 { 00:15:49.292 "nqn": "nqn.2016-06.io.spdk:cnode9014", 00:15:49.292 "model_number": "\\-y76w}>'\''(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'\''DzJs", 00:15:49.292 "method": "nvmf_create_subsystem", 00:15:49.292 "req_id": 1 00:15:49.292 } 00:15:49.292 Got JSON-RPC error response 00:15:49.292 response: 00:15:49.292 { 00:15:49.292 "code": -32602, 00:15:49.292 "message": "Invalid MN \\-y76w}>'\''(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'\''DzJs" 00:15:49.292 }' 00:15:49.292 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:49.292 { 00:15:49.292 "nqn": "nqn.2016-06.io.spdk:cnode9014", 00:15:49.292 "model_number": "\\-y76w}>'(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'DzJs", 00:15:49.292 "method": "nvmf_create_subsystem", 00:15:49.292 "req_id": 1 00:15:49.292 } 00:15:49.292 Got JSON-RPC error response 00:15:49.292 response: 00:15:49.292 { 00:15:49.292 "code": -32602, 00:15:49.292 "message": "Invalid MN \\-y76w}>'(j]?A-1Mt$rJW,4ZBHJK}GC7/ksi'DzJs" 00:15:49.292 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:49.292 10:26:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:49.553 [2024-12-09 10:26:27.104104] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.553 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:49.810 10:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:49.810 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:49.810 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:49.810 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:49.810 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:49.810 [2024-12-09 10:26:27.509447] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:50.067 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:50.067 { 00:15:50.067 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:50.067 "listen_address": { 00:15:50.067 "trtype": "tcp", 00:15:50.067 "traddr": "", 00:15:50.067 "trsvcid": "4421" 00:15:50.067 }, 00:15:50.067 "method": "nvmf_subsystem_remove_listener", 00:15:50.067 "req_id": 1 00:15:50.067 } 00:15:50.067 Got JSON-RPC error response 00:15:50.067 response: 00:15:50.067 { 00:15:50.067 "code": -32602, 00:15:50.067 "message": "Invalid parameters" 00:15:50.067 }' 00:15:50.067 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:50.067 { 00:15:50.067 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:50.067 "listen_address": { 00:15:50.067 "trtype": "tcp", 00:15:50.067 "traddr": "", 00:15:50.067 "trsvcid": "4421" 00:15:50.067 }, 00:15:50.067 "method": "nvmf_subsystem_remove_listener", 00:15:50.067 "req_id": 1 00:15:50.067 } 00:15:50.067 Got JSON-RPC error response 00:15:50.067 response: 00:15:50.067 { 00:15:50.067 "code": -32602, 00:15:50.067 "message": "Invalid parameters" 00:15:50.067 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:50.067 10:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2320 -i 0 00:15:50.067 [2024-12-09 10:26:27.706056] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2320: invalid cntlid range [0-65519] 00:15:50.067 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:50.067 { 00:15:50.067 "nqn": "nqn.2016-06.io.spdk:cnode2320", 00:15:50.067 "min_cntlid": 0, 00:15:50.067 "method": "nvmf_create_subsystem", 00:15:50.067 "req_id": 1 00:15:50.067 } 00:15:50.067 Got JSON-RPC error response 00:15:50.067 response: 00:15:50.067 { 00:15:50.067 "code": -32602, 00:15:50.067 "message": "Invalid cntlid range [0-65519]" 00:15:50.067 }' 00:15:50.067 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:50.067 { 00:15:50.067 "nqn": "nqn.2016-06.io.spdk:cnode2320", 00:15:50.067 "min_cntlid": 0, 00:15:50.067 "method": "nvmf_create_subsystem", 00:15:50.067 "req_id": 1 00:15:50.067 } 00:15:50.067 Got JSON-RPC error response 00:15:50.067 response: 00:15:50.067 { 00:15:50.067 "code": -32602, 00:15:50.067 "message": "Invalid cntlid range [0-65519]" 00:15:50.067 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.067 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12027 -i 65520 00:15:50.323 [2024-12-09 10:26:27.898691] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12027: invalid cntlid range [65520-65519] 00:15:50.323 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:50.323 { 00:15:50.323 "nqn": "nqn.2016-06.io.spdk:cnode12027", 00:15:50.323 "min_cntlid": 65520, 00:15:50.323 "method": "nvmf_create_subsystem", 
00:15:50.323 "req_id": 1 00:15:50.323 } 00:15:50.323 Got JSON-RPC error response 00:15:50.323 response: 00:15:50.323 { 00:15:50.323 "code": -32602, 00:15:50.323 "message": "Invalid cntlid range [65520-65519]" 00:15:50.323 }' 00:15:50.323 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:50.323 { 00:15:50.323 "nqn": "nqn.2016-06.io.spdk:cnode12027", 00:15:50.323 "min_cntlid": 65520, 00:15:50.323 "method": "nvmf_create_subsystem", 00:15:50.323 "req_id": 1 00:15:50.323 } 00:15:50.323 Got JSON-RPC error response 00:15:50.323 response: 00:15:50.323 { 00:15:50.323 "code": -32602, 00:15:50.323 "message": "Invalid cntlid range [65520-65519]" 00:15:50.323 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.323 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14327 -I 0 00:15:50.579 [2024-12-09 10:26:28.107409] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14327: invalid cntlid range [1-0] 00:15:50.579 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:50.579 { 00:15:50.579 "nqn": "nqn.2016-06.io.spdk:cnode14327", 00:15:50.579 "max_cntlid": 0, 00:15:50.579 "method": "nvmf_create_subsystem", 00:15:50.579 "req_id": 1 00:15:50.579 } 00:15:50.579 Got JSON-RPC error response 00:15:50.579 response: 00:15:50.579 { 00:15:50.579 "code": -32602, 00:15:50.579 "message": "Invalid cntlid range [1-0]" 00:15:50.579 }' 00:15:50.579 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:50.579 { 00:15:50.579 "nqn": "nqn.2016-06.io.spdk:cnode14327", 00:15:50.579 "max_cntlid": 0, 00:15:50.579 "method": "nvmf_create_subsystem", 00:15:50.579 "req_id": 1 00:15:50.579 } 00:15:50.579 Got JSON-RPC error response 00:15:50.579 response: 00:15:50.579 { 00:15:50.579 "code": -32602, 
00:15:50.579 "message": "Invalid cntlid range [1-0]" 00:15:50.579 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.579 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10734 -I 65520 00:15:50.835 [2024-12-09 10:26:28.320105] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10734: invalid cntlid range [1-65520] 00:15:50.835 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:50.835 { 00:15:50.835 "nqn": "nqn.2016-06.io.spdk:cnode10734", 00:15:50.835 "max_cntlid": 65520, 00:15:50.835 "method": "nvmf_create_subsystem", 00:15:50.835 "req_id": 1 00:15:50.835 } 00:15:50.835 Got JSON-RPC error response 00:15:50.835 response: 00:15:50.835 { 00:15:50.835 "code": -32602, 00:15:50.835 "message": "Invalid cntlid range [1-65520]" 00:15:50.835 }' 00:15:50.835 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:50.835 { 00:15:50.835 "nqn": "nqn.2016-06.io.spdk:cnode10734", 00:15:50.835 "max_cntlid": 65520, 00:15:50.835 "method": "nvmf_create_subsystem", 00:15:50.835 "req_id": 1 00:15:50.835 } 00:15:50.835 Got JSON-RPC error response 00:15:50.835 response: 00:15:50.835 { 00:15:50.835 "code": -32602, 00:15:50.835 "message": "Invalid cntlid range [1-65520]" 00:15:50.835 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.835 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15528 -i 6 -I 5 00:15:50.835 [2024-12-09 10:26:28.520742] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15528: invalid cntlid range [6-5] 00:15:50.835 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 
00:15:50.835 { 00:15:50.835 "nqn": "nqn.2016-06.io.spdk:cnode15528", 00:15:50.835 "min_cntlid": 6, 00:15:50.835 "max_cntlid": 5, 00:15:50.835 "method": "nvmf_create_subsystem", 00:15:50.835 "req_id": 1 00:15:50.835 } 00:15:50.835 Got JSON-RPC error response 00:15:50.835 response: 00:15:50.835 { 00:15:50.835 "code": -32602, 00:15:50.835 "message": "Invalid cntlid range [6-5]" 00:15:50.835 }' 00:15:50.835 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:50.835 { 00:15:50.835 "nqn": "nqn.2016-06.io.spdk:cnode15528", 00:15:50.835 "min_cntlid": 6, 00:15:50.835 "max_cntlid": 5, 00:15:50.835 "method": "nvmf_create_subsystem", 00:15:50.835 "req_id": 1 00:15:50.835 } 00:15:50.835 Got JSON-RPC error response 00:15:50.835 response: 00:15:50.835 { 00:15:50.835 "code": -32602, 00:15:50.835 "message": "Invalid cntlid range [6-5]" 00:15:50.835 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.836 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:51.093 { 00:15:51.093 "name": "foobar", 00:15:51.093 "method": "nvmf_delete_target", 00:15:51.093 "req_id": 1 00:15:51.093 } 00:15:51.093 Got JSON-RPC error response 00:15:51.093 response: 00:15:51.093 { 00:15:51.093 "code": -32602, 00:15:51.093 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:15:51.093 }' 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:51.093 { 00:15:51.093 "name": "foobar", 00:15:51.093 "method": "nvmf_delete_target", 00:15:51.093 "req_id": 1 00:15:51.093 } 00:15:51.093 Got JSON-RPC error response 00:15:51.093 response: 00:15:51.093 { 00:15:51.093 "code": -32602, 00:15:51.093 "message": "The specified target doesn't exist, cannot delete it." 00:15:51.093 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.093 rmmod nvme_tcp 00:15:51.093 rmmod nvme_fabrics 00:15:51.093 rmmod nvme_keyring 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2600961 ']' 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@518 -- # killprocess 2600961 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2600961 ']' 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2600961 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600961 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600961' 00:15:51.093 killing process with pid 2600961 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2600961 00:15:51.093 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2600961 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@791 -- # iptables-restore 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.352 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.366 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:53.366 00:15:53.366 real 0m12.601s 00:15:53.366 user 0m21.245s 00:15:53.366 sys 0m5.332s 00:15:53.366 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.366 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:53.366 ************************************ 00:15:53.366 END TEST nvmf_invalid 00:15:53.366 ************************************ 00:15:53.366 10:26:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:53.366 10:26:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.366 10:26:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.366 10:26:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.366 ************************************ 00:15:53.366 START TEST nvmf_connect_stress 00:15:53.366 ************************************ 00:15:53.366 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:53.625 * Looking for test storage... 00:15:53.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.625 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.626 --rc genhtml_branch_coverage=1 00:15:53.626 --rc genhtml_function_coverage=1 00:15:53.626 --rc genhtml_legend=1 00:15:53.626 --rc geninfo_all_blocks=1 00:15:53.626 --rc geninfo_unexecuted_blocks=1 00:15:53.626 00:15:53.626 ' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.626 --rc genhtml_branch_coverage=1 00:15:53.626 --rc genhtml_function_coverage=1 00:15:53.626 --rc genhtml_legend=1 00:15:53.626 --rc geninfo_all_blocks=1 00:15:53.626 --rc geninfo_unexecuted_blocks=1 00:15:53.626 00:15:53.626 ' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.626 --rc genhtml_branch_coverage=1 00:15:53.626 --rc genhtml_function_coverage=1 00:15:53.626 --rc genhtml_legend=1 00:15:53.626 --rc geninfo_all_blocks=1 00:15:53.626 --rc geninfo_unexecuted_blocks=1 00:15:53.626 00:15:53.626 ' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:53.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.626 --rc genhtml_branch_coverage=1 00:15:53.626 --rc genhtml_function_coverage=1 00:15:53.626 --rc genhtml_legend=1 00:15:53.626 --rc geninfo_all_blocks=1 00:15:53.626 --rc geninfo_unexecuted_blocks=1 00:15:53.626 00:15:53.626 ' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.626 10:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.626 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.189 10:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:00.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.189 10:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:00.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.189 10:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:00.189 Found net devices under 0000:86:00.0: cvl_0_0 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:00.189 Found net devices under 0000:86:00.1: cvl_0_1 
00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.189 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.189 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.189 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.189 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:00.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:00.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms
00:16:00.190
00:16:00.190 --- 10.0.0.2 ping statistics ---
00:16:00.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:00.190 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:00.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:00.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:16:00.190
00:16:00.190 --- 10.0.0.1 ping statistics ---
00:16:00.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:00.190 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2605366
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2605366
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2605366 ']'
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:00.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.190 [2024-12-09 10:26:37.292637] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
00:16:00.190 [2024-12-09 10:26:37.292688] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:00.190 [2024-12-09 10:26:37.371601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:00.190 [2024-12-09 10:26:37.413474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:00.190 [2024-12-09 10:26:37.413508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:00.190 [2024-12-09 10:26:37.413514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:00.190 [2024-12-09 10:26:37.413521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:00.190 [2024-12-09 10:26:37.413526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:00.190 [2024-12-09 10:26:37.414914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:00.190 [2024-12-09 10:26:37.415020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:00.190 [2024-12-09 10:26:37.415020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.190 [2024-12-09 10:26:37.552419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.190 [2024-12-09 10:26:37.576640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.190 NULL1
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2605387
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.190 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.191 10:26:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.447 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.447 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:00.447 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:00.447 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.447 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.704 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.704 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:00.704 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:00.704 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.704 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:00.962 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.962 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:00.962 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:00.962 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.962 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:01.526 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.526 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:01.526 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:01.526 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.526 10:26:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:01.783 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.783 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:01.783 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:01.783 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.783 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:02.040 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.040 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:02.040 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:02.040 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.040 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:02.297 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.297 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:02.297 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:02.297 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.297 10:26:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:02.875 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.875 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:02.875 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:02.875 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.875 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:03.132 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.132 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:03.132 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:03.132 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.132 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:03.388 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.388 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:03.388 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:03.388 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.388 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:03.645 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.645 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:03.645 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:03.645 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.645 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:03.902 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.902 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:03.902 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:03.902 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.902 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:04.466 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.466 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:04.466 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:04.466 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.466 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:04.723 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.723 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:04.723 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:04.723 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.723 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:04.980 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.980 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:04.980 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:04.980 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.980 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:05.236 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.236 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:05.236 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:05.236 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.236 10:26:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:05.493 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.493 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:05.493 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:05.493 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.493 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.059 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.059 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:06.059 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:06.059 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.059 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.316 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.316 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:06.316 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:06.316 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.316 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.573 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.573 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:06.573 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:06.573 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.573 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:06.831 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.831 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:06.831 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:06.831 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.831 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:07.396 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.396 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:07.396 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:07.396 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.396 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:07.652 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.652 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:07.652 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:07.652 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.652 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:07.909 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.909 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:07.909 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:07.909 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.909 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:08.166 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.166 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:08.166 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:08.166 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.166 10:26:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:08.731 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.731 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:08.731 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:08.731 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.731 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:08.989 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.989 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:08.989 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:08.989 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.989 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:09.246 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.246 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:09.246 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:09.246 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.246 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:09.503 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.503 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:09.504 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:09.504 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.504 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:09.761 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.761 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:09.761 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:09.761 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.761 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:10.019 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2605387
00:16:10.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2605387) - No such process
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2605387
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:10.277 rmmod nvme_tcp
00:16:10.277 rmmod nvme_fabrics
00:16:10.277 rmmod nvme_keyring
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2605366 ']'
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2605366
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2605366 ']'
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2605366
00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress --
common/autotest_common.sh@959 -- # uname 00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2605366 00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2605366' 00:16:10.277 killing process with pid 2605366 00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2605366 00:16:10.277 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2605366 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:10.536 10:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.536 10:26:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.437 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:12.437 00:16:12.437 real 0m19.044s 00:16:12.437 user 0m39.393s 00:16:12.437 sys 0m8.489s 00:16:12.437 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.437 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.437 ************************************ 00:16:12.437 END TEST nvmf_connect_stress 00:16:12.437 ************************************ 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:12.696 ************************************ 00:16:12.696 START TEST nvmf_fused_ordering 00:16:12.696 ************************************ 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:12.696 * Looking for test storage... 
00:16:12.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:12.696 10:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.696 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.697 10:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:12.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.697 --rc genhtml_branch_coverage=1 00:16:12.697 --rc genhtml_function_coverage=1 00:16:12.697 --rc genhtml_legend=1 00:16:12.697 --rc geninfo_all_blocks=1 00:16:12.697 --rc geninfo_unexecuted_blocks=1 00:16:12.697 00:16:12.697 ' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:12.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.697 --rc genhtml_branch_coverage=1 00:16:12.697 --rc genhtml_function_coverage=1 00:16:12.697 --rc genhtml_legend=1 00:16:12.697 --rc geninfo_all_blocks=1 00:16:12.697 --rc geninfo_unexecuted_blocks=1 00:16:12.697 00:16:12.697 ' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:12.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.697 --rc genhtml_branch_coverage=1 00:16:12.697 --rc genhtml_function_coverage=1 00:16:12.697 --rc genhtml_legend=1 00:16:12.697 --rc geninfo_all_blocks=1 00:16:12.697 --rc geninfo_unexecuted_blocks=1 00:16:12.697 00:16:12.697 ' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:12.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.697 --rc genhtml_branch_coverage=1 00:16:12.697 --rc genhtml_function_coverage=1 00:16:12.697 --rc genhtml_legend=1 00:16:12.697 --rc geninfo_all_blocks=1 00:16:12.697 --rc geninfo_unexecuted_blocks=1 00:16:12.697 00:16:12.697 ' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:12.697 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:12.698 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.698 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.698 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.956 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:12.956 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:12.956 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:12.956 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.519 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.519 10:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:19.519 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.519 10:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.519 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:19.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.520 10:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:19.520 Found net devices under 0000:86:00.0: cvl_0_0 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:19.520 Found net devices under 0000:86:00.1: cvl_0_1 
00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:19.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:19.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:16:19.520 00:16:19.520 --- 10.0.0.2 ping statistics --- 00:16:19.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.520 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:16:19.520 00:16:19.520 --- 10.0.0.1 ping statistics --- 00:16:19.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.520 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:19.520 10:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2610544 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2610544 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2610544 ']' 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 [2024-12-09 10:26:56.381075] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
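The nvmf_tcp_init plumbing traced above (nvmf/common.sh @250-@291) boils down to a short sequence: move one interface of the NIC pair into a private network namespace, address both ends, open the NVMe/TCP port, and ping in both directions. The sketch below mirrors the interface names and IPs from the log; the `run()` dry-run wrapper is an illustrative addition so the sequence can be printed without root, not part of the SPDK scripts.

```shell
#!/usr/bin/env bash
# Hedged sketch of the per-test TCP setup performed by nvmf/common.sh above.
# DRY_RUN=1 (the default here) only prints the commands; the real run needs
# root and the cvl_0_0/cvl_0_1 interface pair.
set -euo pipefail

NS=cvl_0_0_ns_spdk            # namespace that will host the SPDK target
TGT_IF=cvl_0_0
TGT_IP=10.0.0.2
INI_IF=cvl_0_1
INI_IP=10.0.0.1
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"        # target side lives in the netns
run ip addr add "$INI_IP/24" dev "$INI_IF"   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                      # root ns -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"  # netns -> initiator
```

Keeping the target in its own namespace is what lets a single host exercise a real TCP path: traffic between 10.0.0.1 and 10.0.0.2 crosses the physical NIC pair instead of short-circuiting through loopback.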
00:16:19.520 [2024-12-09 10:26:56.381124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.520 [2024-12-09 10:26:56.459542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.520 [2024-12-09 10:26:56.500485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.520 [2024-12-09 10:26:56.500520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.520 [2024-12-09 10:26:56.500527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.520 [2024-12-09 10:26:56.500533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.520 [2024-12-09 10:26:56.500538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
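The nvmfappstart/waitforlisten step above launches nvmf_tgt inside the test namespace and then polls until its RPC socket appears before issuing any rpc_cmd calls. A minimal sketch of that polling pattern follows; it is not SPDK's exact helper, and the `waitfor` name and the hard-coded `/var/tmp/spdk.sock` path are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten pattern seen in the log: start the
# target, then poll for the UNIX-domain RPC socket before talking to it.
set -u

# Poll until a path exists, up to max_retries * 0.1 s; return 1 on timeout.
waitfor() {
  local path=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}

# In the real run (requires root and an SPDK build), roughly:
#   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
#   nvmfpid=$!
#   waitfor /var/tmp/spdk.sock && echo "target up (pid $nvmfpid)"
```

Once the socket exists, the subsequent rpc_cmd invocations in the log (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener, bdev_null_create, nvmf_subsystem_add_ns) are plain JSON-RPC calls over that socket.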
00:16:19.520 [2024-12-09 10:26:56.501114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 [2024-12-09 10:26:56.649833] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 [2024-12-09 10:26:56.670038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 NULL1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.520 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:19.520 [2024-12-09 10:26:56.727031] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:16:19.520 [2024-12-09 10:26:56.727061] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610711 ] 00:16:19.520 Attached to nqn.2016-06.io.spdk:cnode1 00:16:19.520 Namespace ID: 1 size: 1GB 00:16:19.520 fused_ordering(0) 00:16:19.520 fused_ordering(1) 00:16:19.520 fused_ordering(2) 00:16:19.520 fused_ordering(3) 00:16:19.520 fused_ordering(4) 00:16:19.520 fused_ordering(5) 00:16:19.520 fused_ordering(6) 00:16:19.520 fused_ordering(7) 00:16:19.520 fused_ordering(8) 00:16:19.520 fused_ordering(9) 00:16:19.520 fused_ordering(10) 00:16:19.520 fused_ordering(11) 00:16:19.520 fused_ordering(12) 00:16:19.520 fused_ordering(13) 00:16:19.520 fused_ordering(14) 00:16:19.520 fused_ordering(15) 00:16:19.520 fused_ordering(16) 00:16:19.520 fused_ordering(17) 00:16:19.520 fused_ordering(18) 00:16:19.520 fused_ordering(19) 00:16:19.520 fused_ordering(20) 00:16:19.520 fused_ordering(21) 00:16:19.520 fused_ordering(22) 00:16:19.520 fused_ordering(23) 00:16:19.520 fused_ordering(24) 00:16:19.520 fused_ordering(25) 00:16:19.520 fused_ordering(26) 00:16:19.520 fused_ordering(27) 00:16:19.520 
fused_ordering(28) 00:16:19.520 fused_ordering(29) 00:16:19.520 fused_ordering(30) 00:16:19.520 fused_ordering(31) 00:16:19.520 fused_ordering(32) 00:16:19.520 fused_ordering(33) 00:16:19.520 fused_ordering(34) 00:16:19.520 fused_ordering(35) 00:16:19.520 fused_ordering(36) 00:16:19.520 fused_ordering(37) 00:16:19.520 fused_ordering(38) 00:16:19.520 fused_ordering(39) 00:16:19.520 fused_ordering(40) 00:16:19.520 fused_ordering(41) 00:16:19.520 fused_ordering(42) 00:16:19.520 fused_ordering(43) 00:16:19.520 fused_ordering(44) 00:16:19.520 fused_ordering(45) 00:16:19.520 fused_ordering(46) 00:16:19.520 fused_ordering(47) 00:16:19.520 fused_ordering(48) 00:16:19.521 fused_ordering(49) 00:16:19.521 fused_ordering(50) 00:16:19.521 fused_ordering(51) 00:16:19.521 fused_ordering(52) 00:16:19.521 fused_ordering(53) 00:16:19.521 fused_ordering(54) 00:16:19.521 fused_ordering(55) 00:16:19.521 fused_ordering(56) 00:16:19.521 fused_ordering(57) 00:16:19.521 fused_ordering(58) 00:16:19.521 fused_ordering(59) 00:16:19.521 fused_ordering(60) 00:16:19.521 fused_ordering(61) 00:16:19.521 fused_ordering(62) 00:16:19.521 fused_ordering(63) 00:16:19.521 fused_ordering(64) 00:16:19.521 fused_ordering(65) 00:16:19.521 fused_ordering(66) 00:16:19.521 fused_ordering(67) 00:16:19.521 fused_ordering(68) 00:16:19.521 fused_ordering(69) 00:16:19.521 fused_ordering(70) 00:16:19.521 fused_ordering(71) 00:16:19.521 fused_ordering(72) 00:16:19.521 fused_ordering(73) 00:16:19.521 fused_ordering(74) 00:16:19.521 fused_ordering(75) 00:16:19.521 fused_ordering(76) 00:16:19.521 fused_ordering(77) 00:16:19.521 fused_ordering(78) 00:16:19.521 fused_ordering(79) 00:16:19.521 fused_ordering(80) 00:16:19.521 fused_ordering(81) 00:16:19.521 fused_ordering(82) 00:16:19.521 fused_ordering(83) 00:16:19.521 fused_ordering(84) 00:16:19.521 fused_ordering(85) 00:16:19.521 fused_ordering(86) 00:16:19.521 fused_ordering(87) 00:16:19.521 fused_ordering(88) 00:16:19.521 fused_ordering(89) 00:16:19.521 
fused_ordering(90) 00:16:19.521 fused_ordering(91) 00:16:19.521 fused_ordering(92) 00:16:19.521 fused_ordering(93) 00:16:19.521 fused_ordering(94) 00:16:19.521 fused_ordering(95) 00:16:19.521 fused_ordering(96) 00:16:19.521 fused_ordering(97) 00:16:19.521 fused_ordering(98) 00:16:19.521 fused_ordering(99) 00:16:19.521 fused_ordering(100) 00:16:19.521 fused_ordering(101) 00:16:19.521 fused_ordering(102) 00:16:19.521 fused_ordering(103) 00:16:19.521 fused_ordering(104) 00:16:19.521 fused_ordering(105) 00:16:19.521 fused_ordering(106) 00:16:19.521 fused_ordering(107) 00:16:19.521 fused_ordering(108) 00:16:19.521 fused_ordering(109) 00:16:19.521 fused_ordering(110) 00:16:19.521 fused_ordering(111) 00:16:19.521 fused_ordering(112) 00:16:19.521 fused_ordering(113) 00:16:19.521 fused_ordering(114) 00:16:19.521 fused_ordering(115) 00:16:19.521 fused_ordering(116) 00:16:19.521 fused_ordering(117) 00:16:19.521 fused_ordering(118) 00:16:19.521 fused_ordering(119) 00:16:19.521 fused_ordering(120) 00:16:19.521 fused_ordering(121) 00:16:19.521 fused_ordering(122) 00:16:19.521 fused_ordering(123) 00:16:19.521 fused_ordering(124) 00:16:19.521 fused_ordering(125) 00:16:19.521 fused_ordering(126) 00:16:19.521 fused_ordering(127) 00:16:19.521 fused_ordering(128) 00:16:19.521 fused_ordering(129) 00:16:19.521 fused_ordering(130) 00:16:19.521 fused_ordering(131) 00:16:19.521 fused_ordering(132) 00:16:19.521 fused_ordering(133) 00:16:19.521 fused_ordering(134) 00:16:19.521 fused_ordering(135) 00:16:19.521 fused_ordering(136) 00:16:19.521 fused_ordering(137) 00:16:19.521 fused_ordering(138) 00:16:19.521 fused_ordering(139) 00:16:19.521 fused_ordering(140) 00:16:19.521 fused_ordering(141) 00:16:19.521 fused_ordering(142) 00:16:19.521 fused_ordering(143) 00:16:19.521 fused_ordering(144) 00:16:19.521 fused_ordering(145) 00:16:19.521 fused_ordering(146) 00:16:19.521 fused_ordering(147) 00:16:19.521 fused_ordering(148) 00:16:19.521 fused_ordering(149) 00:16:19.521 fused_ordering(150) 
00:16:19.521 fused_ordering(151) 00:16:19.521 fused_ordering(152) 00:16:19.521 fused_ordering(153) 00:16:19.521 fused_ordering(154) 00:16:19.521 fused_ordering(155) 00:16:19.521 fused_ordering(156) 00:16:19.521 fused_ordering(157) 00:16:19.521 fused_ordering(158) 00:16:19.521 fused_ordering(159) 00:16:19.521 fused_ordering(160) 00:16:19.521 fused_ordering(161) 00:16:19.521 fused_ordering(162) 00:16:19.521 fused_ordering(163) 00:16:19.521 fused_ordering(164) 00:16:19.521 fused_ordering(165) 00:16:19.521 fused_ordering(166) 00:16:19.521 fused_ordering(167) 00:16:19.521 fused_ordering(168) 00:16:19.521 fused_ordering(169) 00:16:19.521 fused_ordering(170) 00:16:19.521 fused_ordering(171) 00:16:19.521 fused_ordering(172) 00:16:19.521 fused_ordering(173) 00:16:19.521 fused_ordering(174) 00:16:19.521 fused_ordering(175) 00:16:19.521 fused_ordering(176) 00:16:19.521 fused_ordering(177) 00:16:19.521 fused_ordering(178) 00:16:19.521 fused_ordering(179) 00:16:19.521 fused_ordering(180) 00:16:19.521 fused_ordering(181) 00:16:19.521 fused_ordering(182) 00:16:19.521 fused_ordering(183) 00:16:19.521 fused_ordering(184) 00:16:19.521 fused_ordering(185) 00:16:19.521 fused_ordering(186) 00:16:19.521 fused_ordering(187) 00:16:19.521 fused_ordering(188) 00:16:19.521 fused_ordering(189) 00:16:19.521 fused_ordering(190) 00:16:19.521 fused_ordering(191) 00:16:19.521 fused_ordering(192) 00:16:19.521 fused_ordering(193) 00:16:19.521 fused_ordering(194) 00:16:19.521 fused_ordering(195) 00:16:19.521 fused_ordering(196) 00:16:19.521 fused_ordering(197) 00:16:19.521 fused_ordering(198) 00:16:19.521 fused_ordering(199) 00:16:19.521 fused_ordering(200) 00:16:19.521 fused_ordering(201) 00:16:19.521 fused_ordering(202) 00:16:19.521 fused_ordering(203) 00:16:19.521 fused_ordering(204) 00:16:19.521 fused_ordering(205) 00:16:19.777 fused_ordering(206) 00:16:19.777 fused_ordering(207) 00:16:19.777 fused_ordering(208) 00:16:19.777 fused_ordering(209) 00:16:19.777 fused_ordering(210) 00:16:19.777 
fused_ordering(211) 00:16:19.777 fused_ordering(212) 00:16:19.777 fused_ordering(213) 00:16:19.777 fused_ordering(214) 00:16:19.777 fused_ordering(215) 00:16:19.777 fused_ordering(216) 00:16:19.777 fused_ordering(217) 00:16:19.777 fused_ordering(218) 00:16:19.777 fused_ordering(219) 00:16:19.777 fused_ordering(220) 00:16:19.777 fused_ordering(221) 00:16:19.777 fused_ordering(222) 00:16:19.777 fused_ordering(223) 00:16:19.777 fused_ordering(224) 00:16:19.777 fused_ordering(225) 00:16:19.777 fused_ordering(226) 00:16:19.777 fused_ordering(227) 00:16:19.777 fused_ordering(228) 00:16:19.777 fused_ordering(229) 00:16:19.777 fused_ordering(230) 00:16:19.778 fused_ordering(231) 00:16:19.778 fused_ordering(232) 00:16:19.778 fused_ordering(233) 00:16:19.778 fused_ordering(234) 00:16:19.778 fused_ordering(235) 00:16:19.778 fused_ordering(236) 00:16:19.778 fused_ordering(237) 00:16:19.778 fused_ordering(238) 00:16:19.778 fused_ordering(239) 00:16:19.778 fused_ordering(240) 00:16:19.778 fused_ordering(241) 00:16:19.778 fused_ordering(242) 00:16:19.778 fused_ordering(243) 00:16:19.778 fused_ordering(244) 00:16:19.778 fused_ordering(245) 00:16:19.778 fused_ordering(246) 00:16:19.778 fused_ordering(247) 00:16:19.778 fused_ordering(248) 00:16:19.778 fused_ordering(249) 00:16:19.778 fused_ordering(250) 00:16:19.778 fused_ordering(251) 00:16:19.778 fused_ordering(252) 00:16:19.778 fused_ordering(253) 00:16:19.778 fused_ordering(254) 00:16:19.778 fused_ordering(255) 00:16:19.778 fused_ordering(256) 00:16:19.778 fused_ordering(257) 00:16:19.778 fused_ordering(258) 00:16:19.778 fused_ordering(259) 00:16:19.778 fused_ordering(260) 00:16:19.778 fused_ordering(261) 00:16:19.778 fused_ordering(262) 00:16:19.778 fused_ordering(263) 00:16:19.778 fused_ordering(264) 00:16:19.778 fused_ordering(265) 00:16:19.778 fused_ordering(266) 00:16:19.778 fused_ordering(267) 00:16:19.778 fused_ordering(268) 00:16:19.778 fused_ordering(269) 00:16:19.778 fused_ordering(270) 00:16:19.778 fused_ordering(271) 
00:16:19.778 fused_ordering(272) 00:16:19.778 fused_ordering(273) 00:16:19.778 fused_ordering(274) 00:16:19.778 fused_ordering(275) 00:16:19.778 fused_ordering(276) 00:16:19.778 fused_ordering(277) 00:16:19.778 fused_ordering(278) 00:16:19.778 fused_ordering(279) 00:16:19.778 fused_ordering(280) 00:16:19.778 fused_ordering(281) 00:16:19.778 fused_ordering(282) 00:16:19.778 fused_ordering(283) 00:16:19.778 fused_ordering(284) 00:16:19.778 fused_ordering(285) 00:16:19.778 fused_ordering(286) 00:16:19.778 fused_ordering(287) 00:16:19.778 fused_ordering(288) 00:16:19.778 fused_ordering(289) 00:16:19.778 fused_ordering(290) 00:16:19.778 fused_ordering(291) 00:16:19.778 fused_ordering(292) 00:16:19.778 fused_ordering(293) 00:16:19.778 fused_ordering(294) 00:16:19.778 fused_ordering(295) 00:16:19.778 fused_ordering(296) 00:16:19.778 fused_ordering(297) 00:16:19.778 fused_ordering(298) 00:16:19.778 fused_ordering(299) 00:16:19.778 fused_ordering(300) 00:16:19.778 fused_ordering(301) 00:16:19.778 fused_ordering(302) 00:16:19.778 fused_ordering(303) 00:16:19.778 fused_ordering(304) 00:16:19.778 fused_ordering(305) 00:16:19.778 fused_ordering(306) 00:16:19.778 fused_ordering(307) 00:16:19.778 fused_ordering(308) 00:16:19.778 fused_ordering(309) 00:16:19.778 fused_ordering(310) 00:16:19.778 fused_ordering(311) 00:16:19.778 fused_ordering(312) 00:16:19.778 fused_ordering(313) 00:16:19.778 fused_ordering(314) 00:16:19.778 fused_ordering(315) 00:16:19.778 fused_ordering(316) 00:16:19.778 fused_ordering(317) 00:16:19.778 fused_ordering(318) 00:16:19.778 fused_ordering(319) 00:16:19.778 fused_ordering(320) 00:16:19.778 fused_ordering(321) 00:16:19.778 fused_ordering(322) 00:16:19.778 fused_ordering(323) 00:16:19.778 fused_ordering(324) 00:16:19.778 fused_ordering(325) 00:16:19.778 fused_ordering(326) 00:16:19.778 fused_ordering(327) 00:16:19.778 fused_ordering(328) 00:16:19.778 fused_ordering(329) 00:16:19.778 fused_ordering(330) 00:16:19.778 fused_ordering(331) 00:16:19.778 
fused_ordering(332) 00:16:19.778 fused_ordering(333) 00:16:19.778 fused_ordering(334) 00:16:19.778 fused_ordering(335) 00:16:19.778 fused_ordering(336) 00:16:19.778 fused_ordering(337) 00:16:19.778 fused_ordering(338) 00:16:19.778 fused_ordering(339) 00:16:19.778 fused_ordering(340) 00:16:19.778 fused_ordering(341) 00:16:19.778 fused_ordering(342) 00:16:19.778 fused_ordering(343) 00:16:19.778 fused_ordering(344) 00:16:19.778 fused_ordering(345) 00:16:19.778 fused_ordering(346) 00:16:19.778 fused_ordering(347) 00:16:19.778 fused_ordering(348) 00:16:19.778 fused_ordering(349) 00:16:19.778 fused_ordering(350) 00:16:19.778 fused_ordering(351) 00:16:19.778 fused_ordering(352) 00:16:19.778 fused_ordering(353) 00:16:19.778 fused_ordering(354) 00:16:19.778 fused_ordering(355) 00:16:19.778 fused_ordering(356) 00:16:19.778 fused_ordering(357) 00:16:19.778 fused_ordering(358) 00:16:19.778 fused_ordering(359) 00:16:19.778 fused_ordering(360) 00:16:19.778 fused_ordering(361) 00:16:19.778 fused_ordering(362) 00:16:19.778 fused_ordering(363) 00:16:19.778 fused_ordering(364) 00:16:19.778 fused_ordering(365) 00:16:19.778 fused_ordering(366) 00:16:19.778 fused_ordering(367) 00:16:19.778 fused_ordering(368) 00:16:19.778 fused_ordering(369) 00:16:19.778 fused_ordering(370) 00:16:19.778 fused_ordering(371) 00:16:19.778 fused_ordering(372) 00:16:19.778 fused_ordering(373) 00:16:19.778 fused_ordering(374) 00:16:19.778 fused_ordering(375) 00:16:19.778 fused_ordering(376) 00:16:19.778 fused_ordering(377) 00:16:19.778 fused_ordering(378) 00:16:19.778 fused_ordering(379) 00:16:19.778 fused_ordering(380) 00:16:19.778 fused_ordering(381) 00:16:19.778 fused_ordering(382) 00:16:19.778 fused_ordering(383) 00:16:19.778 fused_ordering(384) 00:16:19.778 fused_ordering(385) 00:16:19.778 fused_ordering(386) 00:16:19.778 fused_ordering(387) 00:16:19.778 fused_ordering(388) 00:16:19.778 fused_ordering(389) 00:16:19.778 fused_ordering(390) 00:16:19.778 fused_ordering(391) 00:16:19.778 fused_ordering(392) 
00:16:19.778 fused_ordering(393) 00:16:19.778 fused_ordering(394) 00:16:19.778 fused_ordering(395) 00:16:19.778 fused_ordering(396) 00:16:19.778 fused_ordering(397) 00:16:19.778 fused_ordering(398) 00:16:19.778 fused_ordering(399) 00:16:19.778 fused_ordering(400) 00:16:19.778 fused_ordering(401) 00:16:19.778 fused_ordering(402) 00:16:19.778 fused_ordering(403) 00:16:19.778 fused_ordering(404) 00:16:19.778 fused_ordering(405) 00:16:19.778 fused_ordering(406) 00:16:19.778 fused_ordering(407) 00:16:19.778 fused_ordering(408) 00:16:19.778 fused_ordering(409) 00:16:19.778 fused_ordering(410) 00:16:20.036 fused_ordering(411) 00:16:20.036 fused_ordering(412) 00:16:20.036 fused_ordering(413) 00:16:20.036 fused_ordering(414) 00:16:20.036 fused_ordering(415) 00:16:20.036 fused_ordering(416) 00:16:20.036 fused_ordering(417) 00:16:20.036 fused_ordering(418) 00:16:20.036 fused_ordering(419) 00:16:20.036 fused_ordering(420) 00:16:20.036 fused_ordering(421) 00:16:20.036 fused_ordering(422) 00:16:20.036 fused_ordering(423) 00:16:20.036 fused_ordering(424) 00:16:20.036 fused_ordering(425) 00:16:20.036 fused_ordering(426) 00:16:20.036 fused_ordering(427) 00:16:20.036 fused_ordering(428) 00:16:20.036 fused_ordering(429) 00:16:20.036 fused_ordering(430) 00:16:20.036 fused_ordering(431) 00:16:20.036 fused_ordering(432) 00:16:20.036 fused_ordering(433) 00:16:20.036 fused_ordering(434) 00:16:20.036 fused_ordering(435) 00:16:20.036 fused_ordering(436) 00:16:20.036 fused_ordering(437) 00:16:20.036 fused_ordering(438) 00:16:20.036 fused_ordering(439) 00:16:20.036 fused_ordering(440) 00:16:20.036 fused_ordering(441) 00:16:20.036 fused_ordering(442) 00:16:20.036 fused_ordering(443) 00:16:20.036 fused_ordering(444) 00:16:20.036 fused_ordering(445) 00:16:20.036 fused_ordering(446) 00:16:20.036 fused_ordering(447) 00:16:20.036 fused_ordering(448) 00:16:20.036 fused_ordering(449) 00:16:20.036 fused_ordering(450) 00:16:20.036 fused_ordering(451) 00:16:20.036 fused_ordering(452) 00:16:20.036 
fused_ordering(453) 00:16:20.036 fused_ordering(454) 00:16:20.036 fused_ordering(455) 00:16:20.036 fused_ordering(456) 00:16:20.036 fused_ordering(457) 00:16:20.036 fused_ordering(458) 00:16:20.036 fused_ordering(459) 00:16:20.036 fused_ordering(460) 00:16:20.036 fused_ordering(461) 00:16:20.036 fused_ordering(462) 00:16:20.036 fused_ordering(463) 00:16:20.036 fused_ordering(464) 00:16:20.036 fused_ordering(465) 00:16:20.036 fused_ordering(466) 00:16:20.036 fused_ordering(467) 00:16:20.036 fused_ordering(468) 00:16:20.036 fused_ordering(469) 00:16:20.036 fused_ordering(470) 00:16:20.036 fused_ordering(471) 00:16:20.036 fused_ordering(472) 00:16:20.036 fused_ordering(473) 00:16:20.036 fused_ordering(474) 00:16:20.036 fused_ordering(475) 00:16:20.036 fused_ordering(476) 00:16:20.036 fused_ordering(477) 00:16:20.036 fused_ordering(478) 00:16:20.036 fused_ordering(479) 00:16:20.036 fused_ordering(480) 00:16:20.036 fused_ordering(481) 00:16:20.036 fused_ordering(482) 00:16:20.036 fused_ordering(483) 00:16:20.036 fused_ordering(484) 00:16:20.036 fused_ordering(485) 00:16:20.036 fused_ordering(486) 00:16:20.036 fused_ordering(487) 00:16:20.036 fused_ordering(488) 00:16:20.036 fused_ordering(489) 00:16:20.036 fused_ordering(490) 00:16:20.036 fused_ordering(491) 00:16:20.036 fused_ordering(492) 00:16:20.036 fused_ordering(493) 00:16:20.036 fused_ordering(494) 00:16:20.036 fused_ordering(495) 00:16:20.036 fused_ordering(496) 00:16:20.036 fused_ordering(497) 00:16:20.036 fused_ordering(498) 00:16:20.036 fused_ordering(499) 00:16:20.036 fused_ordering(500) 00:16:20.036 fused_ordering(501) 00:16:20.037 fused_ordering(502) 00:16:20.037 fused_ordering(503) 00:16:20.037 fused_ordering(504) 00:16:20.037 fused_ordering(505) 00:16:20.037 fused_ordering(506) 00:16:20.037 fused_ordering(507) 00:16:20.037 fused_ordering(508) 00:16:20.037 fused_ordering(509) 00:16:20.037 fused_ordering(510) 00:16:20.037 fused_ordering(511) 00:16:20.037 fused_ordering(512) 00:16:20.037 fused_ordering(513) 
00:16:20.037 fused_ordering(514) 00:16:20.037 fused_ordering(515) 00:16:20.037 fused_ordering(516) 00:16:20.037 fused_ordering(517) 00:16:20.037 fused_ordering(518) 00:16:20.037 fused_ordering(519) 00:16:20.037 fused_ordering(520) 00:16:20.037 fused_ordering(521) 00:16:20.037 fused_ordering(522) 00:16:20.037 fused_ordering(523) 00:16:20.037 fused_ordering(524) 00:16:20.037 fused_ordering(525) 00:16:20.037 fused_ordering(526) 00:16:20.037 fused_ordering(527) 00:16:20.037 fused_ordering(528) 00:16:20.037 fused_ordering(529) 00:16:20.037 fused_ordering(530) 00:16:20.037 fused_ordering(531) 00:16:20.037 fused_ordering(532) 00:16:20.037 fused_ordering(533) 00:16:20.037 fused_ordering(534) 00:16:20.037 fused_ordering(535) 00:16:20.037 fused_ordering(536) 00:16:20.037 fused_ordering(537) 00:16:20.037 fused_ordering(538) 00:16:20.037 fused_ordering(539) 00:16:20.037 fused_ordering(540) 00:16:20.037 fused_ordering(541) 00:16:20.037 fused_ordering(542) 00:16:20.037 fused_ordering(543) 00:16:20.037 fused_ordering(544) 00:16:20.037 fused_ordering(545) 00:16:20.037 fused_ordering(546) 00:16:20.037 fused_ordering(547) 00:16:20.037 fused_ordering(548) 00:16:20.037 fused_ordering(549) 00:16:20.037 fused_ordering(550) 00:16:20.037 fused_ordering(551) 00:16:20.037 fused_ordering(552) 00:16:20.037 fused_ordering(553) 00:16:20.037 fused_ordering(554) 00:16:20.037 fused_ordering(555) 00:16:20.037 fused_ordering(556) 00:16:20.037 fused_ordering(557) 00:16:20.037 fused_ordering(558) 00:16:20.037 fused_ordering(559) 00:16:20.037 fused_ordering(560) 00:16:20.037 fused_ordering(561) 00:16:20.037 fused_ordering(562) 00:16:20.037 fused_ordering(563) 00:16:20.037 fused_ordering(564) 00:16:20.037 fused_ordering(565) 00:16:20.037 fused_ordering(566) 00:16:20.037 fused_ordering(567) 00:16:20.037 fused_ordering(568) 00:16:20.037 fused_ordering(569) 00:16:20.037 fused_ordering(570) 00:16:20.037 fused_ordering(571) 00:16:20.037 fused_ordering(572) 00:16:20.037 fused_ordering(573) 00:16:20.037 
fused_ordering(574) 00:16:20.037 fused_ordering(575) 00:16:20.037 [fused_ordering counters 576 through 997 condensed: one identical log entry per counter, timestamps advancing from 00:16:20.037 to 00:16:20.859]
00:16:20.859 fused_ordering(998) 00:16:20.859 fused_ordering(999) 00:16:20.859 fused_ordering(1000) 00:16:20.859 fused_ordering(1001) 00:16:20.859 fused_ordering(1002) 00:16:20.859 fused_ordering(1003) 00:16:20.859 fused_ordering(1004) 00:16:20.859 fused_ordering(1005) 00:16:20.859 fused_ordering(1006) 00:16:20.859 fused_ordering(1007) 00:16:20.859 fused_ordering(1008) 00:16:20.859 fused_ordering(1009) 00:16:20.859 fused_ordering(1010) 00:16:20.859 fused_ordering(1011) 00:16:20.859 fused_ordering(1012) 00:16:20.859 fused_ordering(1013) 00:16:20.859 fused_ordering(1014) 00:16:20.859 fused_ordering(1015) 00:16:20.859 fused_ordering(1016) 00:16:20.859 fused_ordering(1017) 00:16:20.859 fused_ordering(1018) 00:16:20.859 fused_ordering(1019) 00:16:20.859 fused_ordering(1020) 00:16:20.859 fused_ordering(1021) 00:16:20.859 fused_ordering(1022) 00:16:20.859 fused_ordering(1023) 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.859 rmmod nvme_tcp 00:16:20.859 rmmod nvme_fabrics 00:16:20.859 rmmod nvme_keyring 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2610544 ']' 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2610544 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2610544 ']' 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2610544 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.859 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2610544 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2610544' 00:16:21.117 killing process with pid 2610544 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2610544 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2610544 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.117 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:23.648 00:16:23.648 real 0m10.615s 00:16:23.648 user 0m4.950s 00:16:23.648 sys 0m5.760s 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.648 ************************************ 00:16:23.648 END TEST nvmf_fused_ordering 00:16:23.648 ************************************ 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:23.648 10:27:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.648 ************************************ 00:16:23.648 START TEST nvmf_ns_masking 00:16:23.648 ************************************ 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:23.648 * Looking for test storage... 00:16:23.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:16:23.648 10:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.648 10:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.648 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.649 --rc genhtml_branch_coverage=1 00:16:23.649 --rc genhtml_function_coverage=1 00:16:23.649 --rc genhtml_legend=1 00:16:23.649 --rc geninfo_all_blocks=1 00:16:23.649 --rc geninfo_unexecuted_blocks=1 00:16:23.649 00:16:23.649 ' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.649 --rc genhtml_branch_coverage=1 00:16:23.649 --rc genhtml_function_coverage=1 00:16:23.649 --rc genhtml_legend=1 00:16:23.649 --rc geninfo_all_blocks=1 00:16:23.649 --rc geninfo_unexecuted_blocks=1 00:16:23.649 00:16:23.649 ' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.649 --rc genhtml_branch_coverage=1 00:16:23.649 --rc genhtml_function_coverage=1 00:16:23.649 --rc genhtml_legend=1 00:16:23.649 --rc geninfo_all_blocks=1 00:16:23.649 --rc geninfo_unexecuted_blocks=1 00:16:23.649 00:16:23.649 ' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:23.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.649 --rc genhtml_branch_coverage=1 00:16:23.649 --rc 
genhtml_function_coverage=1 00:16:23.649 --rc genhtml_legend=1 00:16:23.649 --rc geninfo_all_blocks=1 00:16:23.649 --rc geninfo_unexecuted_blocks=1 00:16:23.649 00:16:23.649 ' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9efaeaf6-3df4-4bc8-9219-93f10ac935a7 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c956571e-7504-46b9-a550-bdc070f24204 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5bbcaaa4-a5cd-4027-b92e-6a99230a5909 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:23.649 10:27:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:30.219 10:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.219 10:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:30.219 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:30.220 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:30.220 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:30.220 Found net devices under 0000:86:00.0: cvl_0_0 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:30.220 Found net devices under 0000:86:00.1: cvl_0_1 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:30.220 10:27:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:16:30.220 00:16:30.220 --- 10.0.0.2 ping statistics --- 00:16:30.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.220 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:16:30.220 00:16:30.220 --- 10.0.0.1 ping statistics --- 00:16:30.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.220 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2614669 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2614669 
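The network bring-up the log has just completed can be condensed into a handful of `ip` commands: a network namespace is created for the target side, one port of the NIC is moved into it, each side gets an address on 10.0.0.0/24, and a ping in both directions proves the path. A dry-run sketch follows (interface and namespace names copied from the log; `run` only echoes each command, since the real ones need root and the physical `cvl_0_*` ports — swap `echo` for `sudo` to execute for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built above (prints, does not execute).
run() { echo "+ $*"; }   # replace `echo "+"` with `sudo` to actually run

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                                   # isolate the target side
run ip link set "$TARGET_IF" netns "$NS"                 # move target port into it
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"          # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target address
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ping -c 1 10.0.0.2                                   # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
```

Keeping the target in its own namespace is what lets a single machine act as both NVMe/TCP initiator and target over real NIC ports without the kernel short-circuiting the traffic.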
00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2614669 ']' 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:30.220 [2024-12-09 10:27:07.163770] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:16:30.220 [2024-12-09 10:27:07.163829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.220 [2024-12-09 10:27:07.245409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.220 [2024-12-09 10:27:07.285594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.220 [2024-12-09 10:27:07.285627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:30.220 [2024-12-09 10:27:07.285634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.220 [2024-12-09 10:27:07.285640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.220 [2024-12-09 10:27:07.285646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.220 [2024-12-09 10:27:07.286216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:30.220 [2024-12-09 10:27:07.582680] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:30.220 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:16:30.220 Malloc1 00:16:30.221 10:27:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:30.479 Malloc2 00:16:30.479 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:30.737 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:30.737 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.994 [2024-12-09 10:27:08.610842] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.994 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:30.994 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5bbcaaa4-a5cd-4027-b92e-6a99230a5909 -a 10.0.0.2 -s 4420 -i 4 00:16:31.252 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.252 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:31.252 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.252 10:27:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:31.252 10:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:33.149 [ 0]:0x1 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.149 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.406 
10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df4914a447904e93bccc92850fecdf33 00:16:33.406 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df4914a447904e93bccc92850fecdf33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.406 10:27:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:33.406 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:33.406 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:33.406 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.406 [ 0]:0x1 00:16:33.406 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.406 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.663 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df4914a447904e93bccc92850fecdf33 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df4914a447904e93bccc92850fecdf33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.664 [ 1]:0x2 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
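The repeated visibility checks above all reduce to one rule: the script lists namespaces with `nvme list-ns`, reads the NGUID via `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`, and treats an all-zero NGUID as "masked". A minimal standalone sketch of that comparison (the helper mirrors the script's `ns_is_visible`, but takes the already-extracted NGUID as an argument instead of querying `/dev/nvme0`):

```shell
# Sketch: decide visibility from an NGUID string, as ns_masking.sh does.
# Assumption: the caller extracted the NGUID with `nvme id-ns ... -o json | jq -r .nguid`.
ZERO_NGUID=00000000000000000000000000000000

nguid_visible() {
  # A masked namespace identifies with an all-zero NGUID.
  [ "$1" != "$ZERO_NGUID" ]
}

nguid_visible df4914a447904e93bccc92850fecdf33 && echo "ns: visible"
nguid_visible "$ZERO_NGUID" || echo "ns: masked"
```

This is why the log's `[[ <nguid> != \0\0... ]]` tests compare against a string of 32 zeros: the controller still answers `id-ns` for a masked namespace, it just zeroes the identifier.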
00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3cba542109e4c6796170bdc452d6f68 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3cba542109e4c6796170bdc452d6f68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:33.664 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.921 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.921 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:34.179 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:34.179 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5bbcaaa4-a5cd-4027-b92e-6a99230a5909 -a 10.0.0.2 -s 4420 -i 4 00:16:34.437 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:34.437 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.437 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.437 10:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:34.437 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:34.437 10:27:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:36.339 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:36.598 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:36.598 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:36.598 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:36.599 [ 0]:0x2 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3cba542109e4c6796170bdc452d6f68 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3cba542109e4c6796170bdc452d6f68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.599 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:36.857 [ 0]:0x1 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df4914a447904e93bccc92850fecdf33 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df4914a447904e93bccc92850fecdf33 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:36.857 [ 1]:0x2 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:36.857 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3cba542109e4c6796170bdc452d6f68 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3cba542109e4c6796170bdc452d6f68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
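Between the two visibility checks, the log drives the actual masking RPCs: the namespace is re-added with `--no-auto-visible`, then attached to and detached from `host1` with `nvmf_ns_add_host` / `nvmf_ns_remove_host`. A dry-run sketch of that cycle (subsystem and host NQNs copied from the log; `rpc` only echoes, so the sequence can be read or replayed without a running target — point it at the real `scripts/rpc.py` to execute):

```shell
# Dry-run of the masking cycle exercised by ns_masking.sh (echoes instead of executing).
rpc() { echo "rpc.py $*"; }   # swap `echo "rpc.py"` for the real scripts/rpc.py path

SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST1=nqn.2016-06.io.spdk:host1

rpc nvmf_subsystem_remove_ns "$SUBSYS" 1                        # drop auto-visible ns
rpc nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 1 --no-auto-visible
rpc nvmf_ns_add_host "$SUBSYS" 1 "$HOST1"     # ns 1 becomes visible to host1
rpc nvmf_ns_remove_host "$SUBSYS" 1 "$HOST1"  # ns 1 is masked again
```

Each RPC is followed in the log by a fresh `ns_is_visible` / `NOT ns_is_visible` assertion from the host side, which is what confirms the mask took effect without reconnecting.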
ns_is_visible 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:37.116 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.374 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:37.375 [ 0]:0x2 00:16:37.375 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq 
-r .nguid 00:16:37.375 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:37.375 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3cba542109e4c6796170bdc452d6f68 00:16:37.375 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3cba542109e4c6796170bdc452d6f68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.375 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:37.375 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.375 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5bbcaaa4-a5cd-4027-b92e-6a99230a5909 -a 10.0.0.2 -s 4420 -i 4 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:37.632 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.157 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:40.157 [ 0]:0x1 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:40.158 10:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df4914a447904e93bccc92850fecdf33 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df4914a447904e93bccc92850fecdf33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:40.158 [ 1]:0x2 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3cba542109e4c6796170bdc452d6f68 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3cba542109e4c6796170bdc452d6f68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.158 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:40.416 
10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:40.416 [ 0]:0x2 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:40.416 10:27:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3cba542109e4c6796170bdc452d6f68 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3cba542109e4c6796170bdc452d6f68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.416 10:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:40.416 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:40.675 [2024-12-09 10:27:18.206184] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:40.675 request: 00:16:40.675 { 00:16:40.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.675 "nsid": 2, 00:16:40.675 "host": "nqn.2016-06.io.spdk:host1", 00:16:40.675 "method": "nvmf_ns_remove_host", 00:16:40.675 "req_id": 1 00:16:40.675 } 00:16:40.675 Got JSON-RPC error response 00:16:40.675 response: 00:16:40.675 { 00:16:40.675 "code": -32602, 00:16:40.675 "message": "Invalid parameters" 00:16:40.675 } 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.675 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:40.676 10:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:40.676 [ 0]:0x2 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3cba542109e4c6796170bdc452d6f68 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3cba542109e4c6796170bdc452d6f68 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2617055 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2617055 
/var/tmp/host.sock 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2617055 ']' 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:40.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.676 10:27:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:40.934 [2024-12-09 10:27:18.423558] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:16:40.934 [2024-12-09 10:27:18.423602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617055 ] 00:16:40.934 [2024-12-09 10:27:18.499015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.934 [2024-12-09 10:27:18.538865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.866 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.866 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:41.866 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.866 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:42.123 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9efaeaf6-3df4-4bc8-9219-93f10ac935a7 00:16:42.123 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:42.123 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9EFAEAF63DF44BC8921993F10AC935A7 -i 00:16:42.380 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c956571e-7504-46b9-a550-bdc070f24204 00:16:42.380 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:42.380 10:27:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C956571E750446B9A550BDC070F24204 -i 00:16:42.380 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:42.636 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:42.893 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:42.893 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:43.150 nvme0n1 00:16:43.150 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:43.150 10:27:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:43.406 nvme1n2 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:43.662 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:43.920 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9efaeaf6-3df4-4bc8-9219-93f10ac935a7 == \9\e\f\a\e\a\f\6\-\3\d\f\4\-\4\b\c\8\-\9\2\1\9\-\9\3\f\1\0\a\c\9\3\5\a\7 ]] 00:16:43.920 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:43.920 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:43.921 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:44.179 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c956571e-7504-46b9-a550-bdc070f24204 == \c\9\5\6\5\7\1\e\-\7\5\0\4\-\4\6\b\9\-\a\5\5\0\-\b\d\c\0\7\0\f\2\4\2\0\4 ]] 00:16:44.179 10:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.438 10:27:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9efaeaf6-3df4-4bc8-9219-93f10ac935a7 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EFAEAF63DF44BC8921993F10AC935A7 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EFAEAF63DF44BC8921993F10AC935A7 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:44.438 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EFAEAF63DF44BC8921993F10AC935A7 00:16:44.700 [2024-12-09 10:27:22.305409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:44.700 [2024-12-09 10:27:22.305442] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:44.700 [2024-12-09 10:27:22.305450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.700 request: 00:16:44.700 { 00:16:44.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.700 "namespace": { 00:16:44.700 "bdev_name": "invalid", 00:16:44.700 "nsid": 1, 00:16:44.700 "nguid": "9EFAEAF63DF44BC8921993F10AC935A7", 00:16:44.700 "no_auto_visible": false, 00:16:44.700 "hide_metadata": false 00:16:44.700 }, 00:16:44.700 "method": "nvmf_subsystem_add_ns", 00:16:44.700 "req_id": 1 00:16:44.700 } 00:16:44.700 Got JSON-RPC error response 00:16:44.700 response: 00:16:44.700 { 00:16:44.700 "code": -32602, 00:16:44.700 "message": "Invalid parameters" 00:16:44.700 } 00:16:44.700 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:44.700 10:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.700 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.700 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.700 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9efaeaf6-3df4-4bc8-9219-93f10ac935a7 00:16:44.700 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:44.700 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9EFAEAF63DF44BC8921993F10AC935A7 -i 00:16:45.032 10:27:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:46.984 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:46.984 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:46.984 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:46.984 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:46.984 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2617055 00:16:46.984 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2617055 ']' 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2617055 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:47.243 10:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2617055 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2617055' 00:16:47.243 killing process with pid 2617055 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2617055 00:16:47.243 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2617055 00:16:47.506 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:16:47.771 rmmod nvme_tcp 00:16:47.771 rmmod nvme_fabrics 00:16:47.771 rmmod nvme_keyring 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2614669 ']' 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2614669 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2614669 ']' 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2614669 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2614669 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2614669' 00:16:47.771 killing process with pid 2614669 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2614669 00:16:47.771 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2614669 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.030 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:50.597 00:16:50.597 real 0m26.781s 00:16:50.597 user 0m32.489s 00:16:50.597 sys 0m7.127s 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.597 ************************************ 00:16:50.597 END TEST nvmf_ns_masking 00:16:50.597 ************************************ 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.597 ************************************ 00:16:50.597 START TEST nvmf_nvme_cli 00:16:50.597 ************************************ 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:50.597 * Looking for test storage... 00:16:50.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:50.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.597 --rc genhtml_branch_coverage=1 00:16:50.597 --rc genhtml_function_coverage=1 00:16:50.597 --rc genhtml_legend=1 00:16:50.597 --rc geninfo_all_blocks=1 00:16:50.597 --rc geninfo_unexecuted_blocks=1 00:16:50.597 
00:16:50.597 ' 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:50.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.597 --rc genhtml_branch_coverage=1 00:16:50.597 --rc genhtml_function_coverage=1 00:16:50.597 --rc genhtml_legend=1 00:16:50.597 --rc geninfo_all_blocks=1 00:16:50.597 --rc geninfo_unexecuted_blocks=1 00:16:50.597 00:16:50.597 ' 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:50.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.597 --rc genhtml_branch_coverage=1 00:16:50.597 --rc genhtml_function_coverage=1 00:16:50.597 --rc genhtml_legend=1 00:16:50.597 --rc geninfo_all_blocks=1 00:16:50.597 --rc geninfo_unexecuted_blocks=1 00:16:50.597 00:16:50.597 ' 00:16:50.597 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.598 --rc genhtml_branch_coverage=1 00:16:50.598 --rc genhtml_function_coverage=1 00:16:50.598 --rc genhtml_legend=1 00:16:50.598 --rc geninfo_all_blocks=1 00:16:50.598 --rc geninfo_unexecuted_blocks=1 00:16:50.598 00:16:50.598 ' 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.598 10:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.598 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:57.164 10:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:57.164 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:57.164 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.164 10:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:57.164 Found net devices under 0000:86:00.0: cvl_0_0 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:57.164 Found net devices under 0000:86:00.1: cvl_0_1 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:57.164 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.164 10:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:57.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:16:57.165 00:16:57.165 --- 10.0.0.2 ping statistics --- 00:16:57.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.165 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:16:57.165 00:16:57.165 --- 10.0.0.1 ping statistics --- 00:16:57.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.165 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:57.165 10:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2621780 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2621780 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2621780 ']' 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.165 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.165 [2024-12-09 10:27:33.955752] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
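The log above shows `nvmf/common.sh` moving one NIC into a fresh network namespace, addressing both ends, and then launching `nvmf_tgt` inside that namespace. A dry-run sketch of the same wiring, using the interface names and addresses from the log; `run` only echoes, so this is safe to execute without root and without the `cvl_*` interfaces present:

```shell
# Dry-run sketch of the netns wiring nvmf/common.sh performs above.
# Interface names (cvl_0_0 / cvl_0_1) and 10.0.0.x addresses come from
# the log; "run" echoes instead of executing, so no root is needed.
run() { echo "+ $*"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target-side NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
```

The two `ping -c 1` checks in the log then verify that each side can reach the other across the namespace boundary before the target is started.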
00:16:57.165 [2024-12-09 10:27:33.955799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.165 [2024-12-09 10:27:34.036162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.165 [2024-12-09 10:27:34.079104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.165 [2024-12-09 10:27:34.079140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.165 [2024-12-09 10:27:34.079147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.165 [2024-12-09 10:27:34.079153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.165 [2024-12-09 10:27:34.079158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
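The `-m 0xF` core mask passed to `nvmf_tgt` explains the "Total cores available: 4" notice and the four reactors that start next: each set bit in the mask selects one core. A small helper (hypothetical, not part of SPDK) that expands such a mask:

```shell
# Sketch: expand an SPDK/DPDK core mask (e.g. -m 0xF) into the list of
# cores it selects. 0xF = binary 1111 -> cores 0 1 2 3, matching the
# four reactor_run notices in the log above.
mask_to_cores() {
  local mask=$(( $1 )) core=0 cores=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -ne 0 ]; then
      cores="$cores$core "
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${cores% }"
}

mask_to_cores 0xF    # -> 0 1 2 3
```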
00:16:57.165 [2024-12-09 10:27:34.080684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.165 [2024-12-09 10:27:34.080791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.165 [2024-12-09 10:27:34.080898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.165 [2024-12-09 10:27:34.080899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.165 [2024-12-09 10:27:34.833696] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
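Once the target is up, `nvme_cli.sh` drives it entirely through `rpc_cmd` (a wrapper around `scripts/rpc.py` talking to `/var/tmp/spdk.sock`). The full sequence from the log, as an echo-only dry run so the order of calls is visible in one place:

```shell
# Dry-run of the RPC sequence nvme_cli.sh issues in the log above.
# rpc_cmd normally wraps scripts/rpc.py against the target's UNIX
# socket; here it just echoes, so the sequence can be read and run
# anywhere.
rpc_cmd() { echo "rpc.py $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

With the listener in place, `nvme discover` against 10.0.0.2:4420 returns the two discovery log entries shown below in the log.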
00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.165 Malloc0 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.165 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 Malloc1 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 [2024-12-09 10:27:34.931123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.423 10:27:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:57.423 00:16:57.423 Discovery Log Number of Records 2, Generation counter 2 00:16:57.423 =====Discovery Log Entry 0====== 00:16:57.423 trtype: tcp 00:16:57.423 adrfam: ipv4 00:16:57.423 subtype: current discovery subsystem 00:16:57.423 treq: not required 00:16:57.423 portid: 0 00:16:57.423 trsvcid: 4420 
00:16:57.423 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:57.423 traddr: 10.0.0.2 00:16:57.423 eflags: explicit discovery connections, duplicate discovery information 00:16:57.423 sectype: none 00:16:57.423 =====Discovery Log Entry 1====== 00:16:57.423 trtype: tcp 00:16:57.423 adrfam: ipv4 00:16:57.423 subtype: nvme subsystem 00:16:57.423 treq: not required 00:16:57.423 portid: 0 00:16:57.423 trsvcid: 4420 00:16:57.423 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:57.423 traddr: 10.0.0.2 00:16:57.423 eflags: none 00:16:57.423 sectype: none 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:57.423 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.796 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:58.796 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:58.796 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.796 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:58.796 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:58.797 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:00.690 
10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:00.690 /dev/nvme0n2 ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.690 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.947 rmmod nvme_tcp 00:17:00.947 rmmod nvme_fabrics 00:17:00.947 rmmod nvme_keyring 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2621780 ']' 
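The `get_nvme_devs` calls traced above boil down to filtering `nvme list` output for `/dev/nvme*` device nodes. A self-contained sketch of that parsing, fed a sample table standing in for real `nvme list` output (the serial and model strings mirror the log):

```shell
# Sketch of the get_nvme_devs parsing from nvmf/common.sh above:
# keep only lines whose first field is a /dev/nvme* node, skipping
# the "Node"/"-----" header rows that the traced [[ ]] tests reject.
get_nvme_devs() {
  local dev _
  while read -r dev _; do
    [[ $dev == /dev/nvme* ]] && echo "$dev"
  done
}

# Sample input in lieu of the real `nvme list`:
get_nvme_devs <<'EOF'
Node                  SN                   Model
--------------------- -------------------- ------------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1
EOF
```

In the test itself the resulting device count (2) is compared against `nvme_num_before_connection` (0) to confirm the connect actually exposed new namespaces.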
00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2621780 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2621780 ']' 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2621780 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2621780 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2621780' 00:17:00.947 killing process with pid 2621780 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2621780 00:17:00.947 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2621780 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.206 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.108 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.108 00:17:03.108 real 0m13.069s 00:17:03.108 user 0m20.483s 00:17:03.108 sys 0m5.084s 00:17:03.108 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.108 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:03.108 ************************************ 00:17:03.108 END TEST nvmf_nvme_cli 00:17:03.108 ************************************ 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.368 ************************************ 00:17:03.368 
START TEST nvmf_vfio_user 00:17:03.368 ************************************ 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:03.368 * Looking for test storage... 00:17:03.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:17:03.368 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.368 10:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:03.368 10:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:03.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.368 --rc genhtml_branch_coverage=1 00:17:03.368 --rc genhtml_function_coverage=1 00:17:03.368 --rc genhtml_legend=1 00:17:03.368 --rc geninfo_all_blocks=1 00:17:03.368 --rc geninfo_unexecuted_blocks=1 00:17:03.368 00:17:03.368 ' 00:17:03.368 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:03.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.368 --rc genhtml_branch_coverage=1 00:17:03.369 --rc genhtml_function_coverage=1 00:17:03.369 --rc genhtml_legend=1 00:17:03.369 --rc geninfo_all_blocks=1 00:17:03.369 --rc geninfo_unexecuted_blocks=1 00:17:03.369 00:17:03.369 ' 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:03.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.369 --rc genhtml_branch_coverage=1 00:17:03.369 --rc genhtml_function_coverage=1 00:17:03.369 --rc genhtml_legend=1 00:17:03.369 --rc geninfo_all_blocks=1 00:17:03.369 --rc geninfo_unexecuted_blocks=1 00:17:03.369 00:17:03.369 ' 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:03.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.369 --rc genhtml_branch_coverage=1 00:17:03.369 --rc genhtml_function_coverage=1 00:17:03.369 --rc genhtml_legend=1 00:17:03.369 --rc geninfo_all_blocks=1 00:17:03.369 --rc geninfo_unexecuted_blocks=1 00:17:03.369 00:17:03.369 ' 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.369 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.628 
10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:03.628 10:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2623073 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2623073' 00:17:03.628 Process pid: 2623073 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2623073 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2623073 ']' 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.628 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:03.628 [2024-12-09 10:27:41.158303] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:03.628 [2024-12-09 10:27:41.158349] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.628 [2024-12-09 10:27:41.232519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.629 [2024-12-09 10:27:41.274302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.629 [2024-12-09 10:27:41.274337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.629 [2024-12-09 10:27:41.274344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.629 [2024-12-09 10:27:41.274350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.629 [2024-12-09 10:27:41.274356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:03.629 [2024-12-09 10:27:41.275793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.629 [2024-12-09 10:27:41.275906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.629 [2024-12-09 10:27:41.275938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.629 [2024-12-09 10:27:41.275939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.886 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.886 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:03.886 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:04.817 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:05.074 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:05.074 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:05.074 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:05.074 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:05.074 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:05.074 Malloc1 00:17:05.331 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:05.331 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:05.587 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:05.844 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:05.844 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:05.844 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:06.102 Malloc2 00:17:06.102 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:06.102 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:06.358 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:06.618 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:06.618 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:06.618 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:06.618 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:06.618 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:06.618 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:06.618 [2024-12-09 10:27:44.240393] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:06.618 [2024-12-09 10:27:44.240426] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623558 ] 00:17:06.618 [2024-12-09 10:27:44.279266] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:06.618 [2024-12-09 10:27:44.284557] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:06.618 [2024-12-09 10:27:44.284577] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbfb2276000 00:17:06.618 [2024-12-09 10:27:44.285555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.286556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.287559] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.288574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.289571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.290575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.291582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.292594] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:06.618 [2024-12-09 10:27:44.293598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:06.618 [2024-12-09 10:27:44.293607] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbfb226b000 00:17:06.618 [2024-12-09 10:27:44.294522] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:06.618 [2024-12-09 10:27:44.304010] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:06.618 [2024-12-09 10:27:44.304038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:06.618 [2024-12-09 10:27:44.313717] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:17:06.618 [2024-12-09 10:27:44.313753] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:06.618 [2024-12-09 10:27:44.313823] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:06.618 [2024-12-09 10:27:44.313837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:06.618 [2024-12-09 10:27:44.313842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:06.618 [2024-12-09 10:27:44.314708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:06.618 [2024-12-09 10:27:44.314719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:06.618 [2024-12-09 10:27:44.314725] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:06.618 [2024-12-09 10:27:44.315712] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:06.618 [2024-12-09 10:27:44.315721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:06.618 [2024-12-09 10:27:44.315727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:06.618 [2024-12-09 10:27:44.316718] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:06.618 [2024-12-09 10:27:44.316727] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:06.618 [2024-12-09 10:27:44.317720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:06.618 [2024-12-09 10:27:44.317730] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:06.618 [2024-12-09 10:27:44.317735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:06.618 [2024-12-09 10:27:44.317741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:06.618 [2024-12-09 10:27:44.317845] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:06.618 [2024-12-09 10:27:44.317850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:06.618 [2024-12-09 10:27:44.317854] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:06.618 [2024-12-09 10:27:44.318732] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:06.618 [2024-12-09 10:27:44.319738] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:06.618 [2024-12-09 10:27:44.320746] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:17:06.618 [2024-12-09 10:27:44.321749] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:06.618 [2024-12-09 10:27:44.321816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:06.618 [2024-12-09 10:27:44.322761] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:06.618 [2024-12-09 10:27:44.322768] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:06.618 [2024-12-09 10:27:44.322772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:06.618 [2024-12-09 10:27:44.322789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:06.618 [2024-12-09 10:27:44.322796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:06.618 [2024-12-09 10:27:44.322816] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:06.618 [2024-12-09 10:27:44.322821] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.618 [2024-12-09 10:27:44.322824] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.618 [2024-12-09 10:27:44.322835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.618 [2024-12-09 10:27:44.322880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:06.618 [2024-12-09 10:27:44.322891] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:06.618 [2024-12-09 10:27:44.322895] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:06.618 [2024-12-09 10:27:44.322899] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:06.618 [2024-12-09 10:27:44.322903] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:06.618 [2024-12-09 10:27:44.322907] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:06.618 [2024-12-09 10:27:44.322913] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:06.618 [2024-12-09 10:27:44.322917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.322924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.322933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.322947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.322957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.619 [2024-12-09 
10:27:44.322965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.619 [2024-12-09 10:27:44.322972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.619 [2024-12-09 10:27:44.322979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.619 [2024-12-09 10:27:44.322983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.322991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.322999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323012] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:06.619 [2024-12-09 10:27:44.323016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323111] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:06.619 [2024-12-09 10:27:44.323115] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:06.619 [2024-12-09 10:27:44.323118] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.619 [2024-12-09 10:27:44.323124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323148] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:06.619 [2024-12-09 10:27:44.323159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323171] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:06.619 [2024-12-09 10:27:44.323175] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.619 [2024-12-09 10:27:44.323178] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.619 [2024-12-09 10:27:44.323184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323230] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:06.619 [2024-12-09 10:27:44.323234] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.619 [2024-12-09 10:27:44.323236] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.619 [2024-12-09 10:27:44.323242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323295] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:06.619 [2024-12-09 10:27:44.323299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:06.619 [2024-12-09 10:27:44.323304] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:06.619 [2024-12-09 10:27:44.323321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323329] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:06.619 [2024-12-09 10:27:44.323402] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:06.619 [2024-12-09 10:27:44.323406] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:06.619 [2024-12-09 10:27:44.323409] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:06.619 [2024-12-09 10:27:44.323412] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:06.619 [2024-12-09 10:27:44.323415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:06.619 [2024-12-09 10:27:44.323420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:17:06.619 [2024-12-09 10:27:44.323426] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:06.619 [2024-12-09 10:27:44.323430] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:06.619 [2024-12-09 10:27:44.323433] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.619 [2024-12-09 10:27:44.323438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:06.619 [2024-12-09 10:27:44.323444] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:06.620 [2024-12-09 10:27:44.323448] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:06.620 [2024-12-09 10:27:44.323451] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.620 [2024-12-09 10:27:44.323456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:06.620 [2024-12-09 10:27:44.323462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:06.620 [2024-12-09 10:27:44.323466] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:06.620 [2024-12-09 10:27:44.323469] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:06.620 [2024-12-09 10:27:44.323474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:06.620 [2024-12-09 10:27:44.323480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:17:06.620 [2024-12-09 10:27:44.323491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:06.620 [2024-12-09 10:27:44.323502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:06.620 [2024-12-09 10:27:44.323508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:06.620 ===================================================== 00:17:06.620 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:06.620 ===================================================== 00:17:06.620 Controller Capabilities/Features 00:17:06.620 ================================ 00:17:06.620 Vendor ID: 4e58 00:17:06.620 Subsystem Vendor ID: 4e58 00:17:06.620 Serial Number: SPDK1 00:17:06.620 Model Number: SPDK bdev Controller 00:17:06.620 Firmware Version: 25.01 00:17:06.620 Recommended Arb Burst: 6 00:17:06.620 IEEE OUI Identifier: 8d 6b 50 00:17:06.620 Multi-path I/O 00:17:06.620 May have multiple subsystem ports: Yes 00:17:06.620 May have multiple controllers: Yes 00:17:06.620 Associated with SR-IOV VF: No 00:17:06.620 Max Data Transfer Size: 131072 00:17:06.620 Max Number of Namespaces: 32 00:17:06.620 Max Number of I/O Queues: 127 00:17:06.620 NVMe Specification Version (VS): 1.3 00:17:06.620 NVMe Specification Version (Identify): 1.3 00:17:06.620 Maximum Queue Entries: 256 00:17:06.620 Contiguous Queues Required: Yes 00:17:06.620 Arbitration Mechanisms Supported 00:17:06.620 Weighted Round Robin: Not Supported 00:17:06.620 Vendor Specific: Not Supported 00:17:06.620 Reset Timeout: 15000 ms 00:17:06.620 Doorbell Stride: 4 bytes 00:17:06.620 NVM Subsystem Reset: Not Supported 00:17:06.620 Command Sets Supported 00:17:06.620 NVM Command Set: Supported 00:17:06.620 Boot Partition: Not Supported 00:17:06.620 Memory 
Page Size Minimum: 4096 bytes 00:17:06.620 Memory Page Size Maximum: 4096 bytes 00:17:06.620 Persistent Memory Region: Not Supported 00:17:06.620 Optional Asynchronous Events Supported 00:17:06.620 Namespace Attribute Notices: Supported 00:17:06.620 Firmware Activation Notices: Not Supported 00:17:06.620 ANA Change Notices: Not Supported 00:17:06.620 PLE Aggregate Log Change Notices: Not Supported 00:17:06.620 LBA Status Info Alert Notices: Not Supported 00:17:06.620 EGE Aggregate Log Change Notices: Not Supported 00:17:06.620 Normal NVM Subsystem Shutdown event: Not Supported 00:17:06.620 Zone Descriptor Change Notices: Not Supported 00:17:06.620 Discovery Log Change Notices: Not Supported 00:17:06.620 Controller Attributes 00:17:06.620 128-bit Host Identifier: Supported 00:17:06.620 Non-Operational Permissive Mode: Not Supported 00:17:06.620 NVM Sets: Not Supported 00:17:06.620 Read Recovery Levels: Not Supported 00:17:06.620 Endurance Groups: Not Supported 00:17:06.620 Predictable Latency Mode: Not Supported 00:17:06.620 Traffic Based Keep ALive: Not Supported 00:17:06.620 Namespace Granularity: Not Supported 00:17:06.620 SQ Associations: Not Supported 00:17:06.620 UUID List: Not Supported 00:17:06.620 Multi-Domain Subsystem: Not Supported 00:17:06.620 Fixed Capacity Management: Not Supported 00:17:06.620 Variable Capacity Management: Not Supported 00:17:06.620 Delete Endurance Group: Not Supported 00:17:06.620 Delete NVM Set: Not Supported 00:17:06.620 Extended LBA Formats Supported: Not Supported 00:17:06.620 Flexible Data Placement Supported: Not Supported 00:17:06.620 00:17:06.620 Controller Memory Buffer Support 00:17:06.620 ================================ 00:17:06.620 Supported: No 00:17:06.620 00:17:06.620 Persistent Memory Region Support 00:17:06.620 ================================ 00:17:06.620 Supported: No 00:17:06.620 00:17:06.620 Admin Command Set Attributes 00:17:06.620 ============================ 00:17:06.620 Security Send/Receive: Not Supported 
00:17:06.620 Format NVM: Not Supported 00:17:06.620 Firmware Activate/Download: Not Supported 00:17:06.620 Namespace Management: Not Supported 00:17:06.620 Device Self-Test: Not Supported 00:17:06.620 Directives: Not Supported 00:17:06.620 NVMe-MI: Not Supported 00:17:06.620 Virtualization Management: Not Supported 00:17:06.620 Doorbell Buffer Config: Not Supported 00:17:06.620 Get LBA Status Capability: Not Supported 00:17:06.620 Command & Feature Lockdown Capability: Not Supported 00:17:06.620 Abort Command Limit: 4 00:17:06.620 Async Event Request Limit: 4 00:17:06.620 Number of Firmware Slots: N/A 00:17:06.620 Firmware Slot 1 Read-Only: N/A 00:17:06.620 Firmware Activation Without Reset: N/A 00:17:06.620 Multiple Update Detection Support: N/A 00:17:06.620 Firmware Update Granularity: No Information Provided 00:17:06.620 Per-Namespace SMART Log: No 00:17:06.620 Asymmetric Namespace Access Log Page: Not Supported 00:17:06.620 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:06.620 Command Effects Log Page: Supported 00:17:06.620 Get Log Page Extended Data: Supported 00:17:06.620 Telemetry Log Pages: Not Supported 00:17:06.620 Persistent Event Log Pages: Not Supported 00:17:06.620 Supported Log Pages Log Page: May Support 00:17:06.620 Commands Supported & Effects Log Page: Not Supported 00:17:06.620 Feature Identifiers & Effects Log Page:May Support 00:17:06.620 NVMe-MI Commands & Effects Log Page: May Support 00:17:06.620 Data Area 4 for Telemetry Log: Not Supported 00:17:06.620 Error Log Page Entries Supported: 128 00:17:06.620 Keep Alive: Supported 00:17:06.620 Keep Alive Granularity: 10000 ms 00:17:06.620 00:17:06.620 NVM Command Set Attributes 00:17:06.620 ========================== 00:17:06.620 Submission Queue Entry Size 00:17:06.620 Max: 64 00:17:06.620 Min: 64 00:17:06.620 Completion Queue Entry Size 00:17:06.620 Max: 16 00:17:06.620 Min: 16 00:17:06.620 Number of Namespaces: 32 00:17:06.620 Compare Command: Supported 00:17:06.620 Write Uncorrectable 
Command: Not Supported 00:17:06.620 Dataset Management Command: Supported 00:17:06.620 Write Zeroes Command: Supported 00:17:06.620 Set Features Save Field: Not Supported 00:17:06.620 Reservations: Not Supported 00:17:06.620 Timestamp: Not Supported 00:17:06.620 Copy: Supported 00:17:06.620 Volatile Write Cache: Present 00:17:06.620 Atomic Write Unit (Normal): 1 00:17:06.620 Atomic Write Unit (PFail): 1 00:17:06.620 Atomic Compare & Write Unit: 1 00:17:06.620 Fused Compare & Write: Supported 00:17:06.620 Scatter-Gather List 00:17:06.620 SGL Command Set: Supported (Dword aligned) 00:17:06.620 SGL Keyed: Not Supported 00:17:06.620 SGL Bit Bucket Descriptor: Not Supported 00:17:06.620 SGL Metadata Pointer: Not Supported 00:17:06.620 Oversized SGL: Not Supported 00:17:06.620 SGL Metadata Address: Not Supported 00:17:06.620 SGL Offset: Not Supported 00:17:06.620 Transport SGL Data Block: Not Supported 00:17:06.620 Replay Protected Memory Block: Not Supported 00:17:06.620 00:17:06.620 Firmware Slot Information 00:17:06.620 ========================= 00:17:06.620 Active slot: 1 00:17:06.620 Slot 1 Firmware Revision: 25.01 00:17:06.620 00:17:06.620 00:17:06.620 Commands Supported and Effects 00:17:06.620 ============================== 00:17:06.620 Admin Commands 00:17:06.620 -------------- 00:17:06.620 Get Log Page (02h): Supported 00:17:06.620 Identify (06h): Supported 00:17:06.620 Abort (08h): Supported 00:17:06.620 Set Features (09h): Supported 00:17:06.620 Get Features (0Ah): Supported 00:17:06.620 Asynchronous Event Request (0Ch): Supported 00:17:06.620 Keep Alive (18h): Supported 00:17:06.620 I/O Commands 00:17:06.620 ------------ 00:17:06.620 Flush (00h): Supported LBA-Change 00:17:06.620 Write (01h): Supported LBA-Change 00:17:06.620 Read (02h): Supported 00:17:06.620 Compare (05h): Supported 00:17:06.620 Write Zeroes (08h): Supported LBA-Change 00:17:06.620 Dataset Management (09h): Supported LBA-Change 00:17:06.621 Copy (19h): Supported LBA-Change 00:17:06.621 
00:17:06.621 Error Log 00:17:06.621 ========= 00:17:06.621 00:17:06.621 Arbitration 00:17:06.621 =========== 00:17:06.621 Arbitration Burst: 1 00:17:06.621 00:17:06.621 Power Management 00:17:06.621 ================ 00:17:06.621 Number of Power States: 1 00:17:06.621 Current Power State: Power State #0 00:17:06.621 Power State #0: 00:17:06.621 Max Power: 0.00 W 00:17:06.621 Non-Operational State: Operational 00:17:06.621 Entry Latency: Not Reported 00:17:06.621 Exit Latency: Not Reported 00:17:06.621 Relative Read Throughput: 0 00:17:06.621 Relative Read Latency: 0 00:17:06.621 Relative Write Throughput: 0 00:17:06.621 Relative Write Latency: 0 00:17:06.621 Idle Power: Not Reported 00:17:06.621 Active Power: Not Reported 00:17:06.621 Non-Operational Permissive Mode: Not Supported 00:17:06.621 00:17:06.621 Health Information 00:17:06.621 ================== 00:17:06.621 Critical Warnings: 00:17:06.621 Available Spare Space: OK 00:17:06.621 Temperature: OK 00:17:06.621 Device Reliability: OK 00:17:06.621 Read Only: No 00:17:06.621 Volatile Memory Backup: OK 00:17:06.621 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:06.621 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:06.621 Available Spare: 0% 00:17:06.621 Available Sp[2024-12-09 10:27:44.323592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:06.621 [2024-12-09 10:27:44.323599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:06.621 [2024-12-09 10:27:44.323625] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:06.621 [2024-12-09 10:27:44.323633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.621 [2024-12-09 10:27:44.323639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.621 [2024-12-09 10:27:44.323644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.621 [2024-12-09 10:27:44.323649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.621 [2024-12-09 10:27:44.323773] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:06.621 [2024-12-09 10:27:44.323782] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:06.621 [2024-12-09 10:27:44.324786] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:06.621 [2024-12-09 10:27:44.324841] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:06.621 [2024-12-09 10:27:44.324848] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:06.621 [2024-12-09 10:27:44.325782] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:06.621 [2024-12-09 10:27:44.325791] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:06.621 [2024-12-09 10:27:44.325843] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:06.621 [2024-12-09 10:27:44.326814] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:06.878 are Threshold: 0% 00:17:06.878 Life Percentage Used: 0% 
00:17:06.878 Data Units Read: 0 00:17:06.878 Data Units Written: 0 00:17:06.878 Host Read Commands: 0 00:17:06.878 Host Write Commands: 0 00:17:06.878 Controller Busy Time: 0 minutes 00:17:06.878 Power Cycles: 0 00:17:06.878 Power On Hours: 0 hours 00:17:06.878 Unsafe Shutdowns: 0 00:17:06.878 Unrecoverable Media Errors: 0 00:17:06.878 Lifetime Error Log Entries: 0 00:17:06.878 Warning Temperature Time: 0 minutes 00:17:06.878 Critical Temperature Time: 0 minutes 00:17:06.878 00:17:06.878 Number of Queues 00:17:06.878 ================ 00:17:06.878 Number of I/O Submission Queues: 127 00:17:06.878 Number of I/O Completion Queues: 127 00:17:06.878 00:17:06.878 Active Namespaces 00:17:06.878 ================= 00:17:06.878 Namespace ID:1 00:17:06.878 Error Recovery Timeout: Unlimited 00:17:06.878 Command Set Identifier: NVM (00h) 00:17:06.878 Deallocate: Supported 00:17:06.878 Deallocated/Unwritten Error: Not Supported 00:17:06.878 Deallocated Read Value: Unknown 00:17:06.878 Deallocate in Write Zeroes: Not Supported 00:17:06.878 Deallocated Guard Field: 0xFFFF 00:17:06.878 Flush: Supported 00:17:06.878 Reservation: Supported 00:17:06.878 Namespace Sharing Capabilities: Multiple Controllers 00:17:06.878 Size (in LBAs): 131072 (0GiB) 00:17:06.878 Capacity (in LBAs): 131072 (0GiB) 00:17:06.878 Utilization (in LBAs): 131072 (0GiB) 00:17:06.878 NGUID: 9E69E4570A3C49C1B6786CCD9E64929B 00:17:06.878 UUID: 9e69e457-0a3c-49c1-b678-6ccd9e64929b 00:17:06.878 Thin Provisioning: Not Supported 00:17:06.878 Per-NS Atomic Units: Yes 00:17:06.878 Atomic Boundary Size (Normal): 0 00:17:06.878 Atomic Boundary Size (PFail): 0 00:17:06.878 Atomic Boundary Offset: 0 00:17:06.878 Maximum Single Source Range Length: 65535 00:17:06.878 Maximum Copy Length: 65535 00:17:06.878 Maximum Source Range Count: 1 00:17:06.878 NGUID/EUI64 Never Reused: No 00:17:06.878 Namespace Write Protected: No 00:17:06.878 Number of LBA Formats: 1 00:17:06.878 Current LBA Format: LBA Format #00 00:17:06.878 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:17:06.878 00:17:06.878 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:06.878 [2024-12-09 10:27:44.560863] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:12.129 Initializing NVMe Controllers 00:17:12.129 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:12.129 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:12.129 Initialization complete. Launching workers. 00:17:12.129 ======================================================== 00:17:12.129 Latency(us) 00:17:12.129 Device Information : IOPS MiB/s Average min max 00:17:12.129 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39958.70 156.09 3203.55 961.56 8601.99 00:17:12.129 ======================================================== 00:17:12.129 Total : 39958.70 156.09 3203.55 961.56 8601.99 00:17:12.129 00:17:12.129 [2024-12-09 10:27:49.582709] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:12.129 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:12.129 [2024-12-09 10:27:49.812766] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:17.437 Initializing NVMe Controllers 00:17:17.437 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:17.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:17.437 Initialization complete. Launching workers. 00:17:17.437 ======================================================== 00:17:17.437 Latency(us) 00:17:17.437 Device Information : IOPS MiB/s Average min max 00:17:17.438 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.53 62.68 7976.63 5985.49 9973.46 00:17:17.438 ======================================================== 00:17:17.438 Total : 16045.53 62.68 7976.63 5985.49 9973.46 00:17:17.438 00:17:17.438 [2024-12-09 10:27:54.846063] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:17.438 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:17.438 [2024-12-09 10:27:55.061085] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:22.689 [2024-12-09 10:28:00.173312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:22.689 Initializing NVMe Controllers 00:17:22.689 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:22.689 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:22.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:22.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:22.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:22.689 Initialization complete. 
Launching workers. 00:17:22.689 Starting thread on core 2 00:17:22.689 Starting thread on core 3 00:17:22.689 Starting thread on core 1 00:17:22.689 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:22.947 [2024-12-09 10:28:00.469186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:26.223 [2024-12-09 10:28:03.534659] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:26.223 Initializing NVMe Controllers 00:17:26.223 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:26.223 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:26.223 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:26.223 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:26.223 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:26.223 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:26.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:26.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:26.223 Initialization complete. Launching workers. 
00:17:26.223 Starting thread on core 1 with urgent priority queue 00:17:26.223 Starting thread on core 2 with urgent priority queue 00:17:26.223 Starting thread on core 3 with urgent priority queue 00:17:26.223 Starting thread on core 0 with urgent priority queue 00:17:26.223 SPDK bdev Controller (SPDK1 ) core 0: 9522.67 IO/s 10.50 secs/100000 ios 00:17:26.223 SPDK bdev Controller (SPDK1 ) core 1: 9253.33 IO/s 10.81 secs/100000 ios 00:17:26.223 SPDK bdev Controller (SPDK1 ) core 2: 8417.33 IO/s 11.88 secs/100000 ios 00:17:26.223 SPDK bdev Controller (SPDK1 ) core 3: 7679.33 IO/s 13.02 secs/100000 ios 00:17:26.223 ======================================================== 00:17:26.223 00:17:26.223 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:26.223 [2024-12-09 10:28:03.814441] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:26.223 Initializing NVMe Controllers 00:17:26.223 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:26.223 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:26.223 Namespace ID: 1 size: 0GB 00:17:26.223 Initialization complete. 00:17:26.223 INFO: using host memory buffer for IO 00:17:26.223 Hello world! 
00:17:26.223 [2024-12-09 10:28:03.847643] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:26.223 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:26.480 [2024-12-09 10:28:04.125992] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:27.858 Initializing NVMe Controllers 00:17:27.858 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:27.858 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:27.858 Initialization complete. Launching workers. 00:17:27.858 submit (in ns) avg, min, max = 7484.3, 3152.4, 3999805.7 00:17:27.858 complete (in ns) avg, min, max = 18917.8, 1713.3, 4174587.6 00:17:27.858 00:17:27.858 Submit histogram 00:17:27.858 ================ 00:17:27.858 Range in us Cumulative Count 00:17:27.858 3.139 - 3.154: 0.0060% ( 1) 00:17:27.858 3.154 - 3.170: 0.0181% ( 2) 00:17:27.858 3.170 - 3.185: 0.0783% ( 10) 00:17:27.858 3.185 - 3.200: 0.1326% ( 9) 00:17:27.858 3.200 - 3.215: 0.4097% ( 46) 00:17:27.858 3.215 - 3.230: 2.2295% ( 302) 00:17:27.858 3.230 - 3.246: 6.0858% ( 640) 00:17:27.858 3.246 - 3.261: 10.5688% ( 744) 00:17:27.858 3.261 - 3.276: 15.8171% ( 871) 00:17:27.858 3.276 - 3.291: 21.7402% ( 983) 00:17:27.858 3.291 - 3.307: 27.3319% ( 928) 00:17:27.858 3.307 - 3.322: 33.3876% ( 1005) 00:17:27.858 3.322 - 3.337: 39.2323% ( 970) 00:17:27.858 3.337 - 3.352: 45.1253% ( 978) 00:17:27.858 3.352 - 3.368: 50.6146% ( 911) 00:17:27.858 3.368 - 3.383: 57.6163% ( 1162) 00:17:27.858 3.383 - 3.398: 64.2444% ( 1100) 00:17:27.858 3.398 - 3.413: 69.1311% ( 811) 00:17:27.858 3.413 - 3.429: 74.2649% ( 852) 00:17:27.858 3.429 - 3.444: 78.6575% ( 729) 00:17:27.858 3.444 - 3.459: 81.7968% ( 521) 
00:17:27.858 3.459 - 3.474: 84.1950% ( 398) 00:17:27.858 3.474 - 3.490: 85.8098% ( 268) 00:17:27.858 3.490 - 3.505: 86.8583% ( 174) 00:17:27.858 3.505 - 3.520: 87.7199% ( 143) 00:17:27.858 3.520 - 3.535: 88.4008% ( 113) 00:17:27.858 3.535 - 3.550: 89.0938% ( 115) 00:17:27.858 3.550 - 3.566: 89.8470% ( 125) 00:17:27.858 3.566 - 3.581: 90.7327% ( 147) 00:17:27.858 3.581 - 3.596: 91.5763% ( 140) 00:17:27.858 3.596 - 3.611: 92.5163% ( 156) 00:17:27.858 3.611 - 3.627: 93.4984% ( 163) 00:17:27.858 3.627 - 3.642: 94.5770% ( 179) 00:17:27.858 3.642 - 3.657: 95.5170% ( 156) 00:17:27.858 3.657 - 3.672: 96.4148% ( 149) 00:17:27.858 3.672 - 3.688: 97.1258% ( 118) 00:17:27.858 3.688 - 3.703: 97.7886% ( 110) 00:17:27.858 3.703 - 3.718: 98.2466% ( 76) 00:17:27.858 3.718 - 3.733: 98.6864% ( 73) 00:17:27.858 3.733 - 3.749: 99.0178% ( 55) 00:17:27.858 3.749 - 3.764: 99.2589% ( 40) 00:17:27.858 3.764 - 3.779: 99.4215% ( 27) 00:17:27.858 3.779 - 3.794: 99.4818% ( 10) 00:17:27.858 3.794 - 3.810: 99.5481% ( 11) 00:17:27.858 3.810 - 3.825: 99.6023% ( 9) 00:17:27.858 3.825 - 3.840: 99.6385% ( 6) 00:17:27.858 3.840 - 3.855: 99.6445% ( 1) 00:17:27.858 3.855 - 3.870: 99.6565% ( 2) 00:17:27.858 3.870 - 3.886: 99.6626% ( 1) 00:17:27.858 3.901 - 3.931: 99.6686% ( 1) 00:17:27.858 4.998 - 5.029: 99.6746% ( 1) 00:17:27.858 5.150 - 5.181: 99.6806% ( 1) 00:17:27.858 5.242 - 5.272: 99.6867% ( 1) 00:17:27.858 5.272 - 5.303: 99.6987% ( 2) 00:17:27.858 5.425 - 5.455: 99.7047% ( 1) 00:17:27.858 5.455 - 5.486: 99.7108% ( 1) 00:17:27.858 5.547 - 5.577: 99.7228% ( 2) 00:17:27.858 5.577 - 5.608: 99.7289% ( 1) 00:17:27.858 5.638 - 5.669: 99.7349% ( 1) 00:17:27.858 5.699 - 5.730: 99.7409% ( 1) 00:17:27.858 5.760 - 5.790: 99.7469% ( 1) 00:17:27.858 5.943 - 5.973: 99.7530% ( 1) 00:17:27.858 5.973 - 6.004: 99.7590% ( 1) 00:17:27.858 6.156 - 6.187: 99.7650% ( 1) 00:17:27.858 6.187 - 6.217: 99.7710% ( 1) 00:17:27.858 6.217 - 6.248: 99.7771% ( 1) 00:17:27.858 6.309 - 6.339: 99.7831% ( 1) 00:17:27.858 6.339 - 6.370: 
99.7891% ( 1) 00:17:27.858 6.522 - 6.552: 99.7951% ( 1) 00:17:27.858 6.583 - 6.613: 99.8012% ( 1) 00:17:27.858 6.827 - 6.857: 99.8072% ( 1) 00:17:27.858 6.979 - 7.010: 99.8132% ( 1) 00:17:27.858 7.040 - 7.070: 99.8192% ( 1) 00:17:27.858 7.070 - 7.101: 99.8313% ( 2) 00:17:27.858 7.101 - 7.131: 99.8373% ( 1) 00:17:27.858 7.131 - 7.162: 99.8433% ( 1) 00:17:27.858 7.162 - 7.192: 99.8494% ( 1) 00:17:27.858 7.314 - 7.345: 99.8554% ( 1) 00:17:27.858 7.375 - 7.406: 99.8614% ( 1) 00:17:27.858 [2024-12-09 10:28:05.146948] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:27.858 7.436 - 7.467: 99.8674% ( 1) 00:17:27.858 7.467 - 7.497: 99.8735% ( 1) 00:17:27.858 7.771 - 7.802: 99.8795% ( 1) 00:17:27.858 8.533 - 8.594: 99.8855% ( 1) 00:17:27.858 13.714 - 13.775: 99.8915% ( 1) 00:17:27.858 39.985 - 40.229: 99.8976% ( 1) 00:17:27.858 3994.575 - 4025.783: 100.0000% ( 17) 00:17:27.858 00:17:27.858 Complete histogram 00:17:27.858 ================== 00:17:27.858 Range in us Cumulative Count 00:17:27.858 1.707 - 1.714: 0.0060% ( 1) 00:17:27.858 1.714 - 1.722: 0.0723% ( 11) 00:17:27.858 1.722 - 1.730: 0.3254% ( 42) 00:17:27.858 1.730 - 1.737: 0.6508% ( 54) 00:17:27.858 1.737 - 1.745: 0.7351% ( 14) 00:17:27.858 1.745 - 1.752: 0.7472% ( 2) 00:17:27.858 1.752 - 1.760: 0.7713% ( 4) 00:17:27.858 1.760 - 1.768: 1.9282% ( 192) 00:17:27.858 1.768 - 1.775: 12.6476% ( 1779) 00:17:27.858 1.775 - 1.783: 40.5459% ( 4630) 00:17:27.858 1.783 - 1.790: 60.4483% ( 3303) 00:17:27.858 1.790 - 1.798: 66.1304% ( 943) 00:17:27.858 1.798 - 1.806: 68.5768% ( 406) 00:17:27.858 1.806 - 1.813: 70.1735% ( 265) 00:17:27.858 1.813 - 1.821: 71.0231% ( 141) 00:17:27.858 1.821 - 1.829: 72.5235% ( 249) 00:17:27.858 1.829 - 1.836: 78.6515% ( 1017) 00:17:27.858 1.836 - 1.844: 87.7260% ( 1506) 00:17:27.858 1.844 - 1.851: 92.9200% ( 862) 00:17:27.858 1.851 - 1.859: 95.6134% ( 447) 00:17:27.858 1.859 - 1.867: 97.0234% ( 234) 00:17:27.858 1.867 - 1.874: 97.7344% ( 
118) 00:17:27.858 1.874 - 1.882: 98.1080% ( 62) 00:17:27.858 1.882 - 1.890: 98.2767% ( 28) 00:17:27.858 1.890 - 1.897: 98.4153% ( 23) 00:17:27.858 1.897 - 1.905: 98.6201% ( 34) 00:17:27.858 1.905 - 1.912: 98.9214% ( 50) 00:17:27.858 1.912 - 1.920: 99.1866% ( 44) 00:17:27.858 1.920 - 1.928: 99.2649% ( 13) 00:17:27.858 1.928 - 1.935: 99.3312% ( 11) 00:17:27.858 1.935 - 1.943: 99.3432% ( 2) 00:17:27.858 1.943 - 1.950: 99.3553% ( 2) 00:17:27.858 1.950 - 1.966: 99.3794% ( 4) 00:17:27.858 1.966 - 1.981: 99.3854% ( 1) 00:17:27.858 1.981 - 1.996: 99.3914% ( 1) 00:17:27.858 2.088 - 2.103: 99.3974% ( 1) 00:17:27.858 2.118 - 2.133: 99.4035% ( 1) 00:17:27.858 2.164 - 2.179: 99.4095% ( 1) 00:17:27.858 2.255 - 2.270: 99.4155% ( 1) 00:17:27.858 2.377 - 2.392: 99.4215% ( 1) 00:17:27.858 3.322 - 3.337: 99.4276% ( 1) 00:17:27.858 3.413 - 3.429: 99.4336% ( 1) 00:17:27.858 3.429 - 3.444: 99.4396% ( 1) 00:17:27.858 3.657 - 3.672: 99.4456% ( 1) 00:17:27.858 3.672 - 3.688: 99.4517% ( 1) 00:17:27.858 3.840 - 3.855: 99.4577% ( 1) 00:17:27.858 3.901 - 3.931: 99.4637% ( 1) 00:17:27.858 3.962 - 3.992: 99.4698% ( 1) 00:17:27.858 4.084 - 4.114: 99.4758% ( 1) 00:17:27.858 4.267 - 4.297: 99.4818% ( 1) 00:17:27.858 4.480 - 4.510: 99.4878% ( 1) 00:17:27.858 4.663 - 4.693: 99.4999% ( 2) 00:17:27.858 5.029 - 5.059: 99.5059% ( 1) 00:17:27.858 5.090 - 5.120: 99.5119% ( 1) 00:17:27.858 5.272 - 5.303: 99.5180% ( 1) 00:17:27.858 5.455 - 5.486: 99.5240% ( 1) 00:17:27.858 5.547 - 5.577: 99.5300% ( 1) 00:17:27.858 5.608 - 5.638: 99.5360% ( 1) 00:17:27.858 5.882 - 5.912: 99.5421% ( 1) 00:17:27.858 6.248 - 6.278: 99.5481% ( 1) 00:17:27.858 7.101 - 7.131: 99.5541% ( 1) 00:17:27.858 12.130 - 12.190: 99.5601% ( 1) 00:17:27.858 12.251 - 12.312: 99.5662% ( 1) 00:17:27.858 142.385 - 143.360: 99.5722% ( 1) 00:17:27.858 3978.971 - 3994.575: 99.5782% ( 1) 00:17:27.858 3994.575 - 4025.783: 99.9940% ( 69) 00:17:27.858 4150.613 - 4181.821: 100.0000% ( 1) 00:17:27.858 00:17:27.858 10:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:27.858 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:27.858 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:27.858 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:27.858 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:27.858 [ 00:17:27.858 { 00:17:27.858 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:27.858 "subtype": "Discovery", 00:17:27.858 "listen_addresses": [], 00:17:27.858 "allow_any_host": true, 00:17:27.858 "hosts": [] 00:17:27.858 }, 00:17:27.858 { 00:17:27.858 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:27.858 "subtype": "NVMe", 00:17:27.858 "listen_addresses": [ 00:17:27.858 { 00:17:27.858 "trtype": "VFIOUSER", 00:17:27.858 "adrfam": "IPv4", 00:17:27.858 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:27.858 "trsvcid": "0" 00:17:27.858 } 00:17:27.858 ], 00:17:27.858 "allow_any_host": true, 00:17:27.858 "hosts": [], 00:17:27.858 "serial_number": "SPDK1", 00:17:27.858 "model_number": "SPDK bdev Controller", 00:17:27.858 "max_namespaces": 32, 00:17:27.858 "min_cntlid": 1, 00:17:27.858 "max_cntlid": 65519, 00:17:27.858 "namespaces": [ 00:17:27.858 { 00:17:27.858 "nsid": 1, 00:17:27.858 "bdev_name": "Malloc1", 00:17:27.858 "name": "Malloc1", 00:17:27.858 "nguid": "9E69E4570A3C49C1B6786CCD9E64929B", 00:17:27.858 "uuid": "9e69e457-0a3c-49c1-b678-6ccd9e64929b" 00:17:27.858 } 00:17:27.858 ] 00:17:27.858 }, 00:17:27.858 { 00:17:27.858 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:27.858 "subtype": "NVMe", 
00:17:27.858 "listen_addresses": [ 00:17:27.858 { 00:17:27.858 "trtype": "VFIOUSER", 00:17:27.858 "adrfam": "IPv4", 00:17:27.858 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:27.859 "trsvcid": "0" 00:17:27.859 } 00:17:27.859 ], 00:17:27.859 "allow_any_host": true, 00:17:27.859 "hosts": [], 00:17:27.859 "serial_number": "SPDK2", 00:17:27.859 "model_number": "SPDK bdev Controller", 00:17:27.859 "max_namespaces": 32, 00:17:27.859 "min_cntlid": 1, 00:17:27.859 "max_cntlid": 65519, 00:17:27.859 "namespaces": [ 00:17:27.859 { 00:17:27.859 "nsid": 1, 00:17:27.859 "bdev_name": "Malloc2", 00:17:27.859 "name": "Malloc2", 00:17:27.859 "nguid": "0C16D9BD3970440F917513F44EA8F2D4", 00:17:27.859 "uuid": "0c16d9bd-3970-440f-9175-13f44ea8f2d4" 00:17:27.859 } 00:17:27.859 ] 00:17:27.859 } 00:17:27.859 ] 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2627012 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:27.859 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:27.859 [2024-12-09 10:28:05.536214] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:28.116 Malloc3 00:17:28.116 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:28.116 [2024-12-09 10:28:05.772968] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:28.116 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:28.116 Asynchronous Event Request test 00:17:28.116 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:28.116 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:28.116 Registering asynchronous event callbacks... 00:17:28.116 Starting namespace attribute notice tests for all controllers... 00:17:28.116 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:28.116 aer_cb - Changed Namespace 00:17:28.116 Cleaning up... 
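The `waitforfile` helper traced above (the repeated `'[' '!' -e /tmp/aer_touch_file ']'` tests from `autotest_common.sh`) polls until the aer tool creates its touch file. A minimal sketch of that pattern, under the assumption that the real helper is a bounded existence-poll loop (the iteration cap and sleep interval here are illustrative, not copied from `autotest_common.sh`):

```shell
# Sketch of a waitforfile-style helper: poll until the touch file exists,
# giving up after a bounded number of iterations instead of hanging forever.
waitforfile() {
    local path=$1 i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        # Illustrative cap; the real helper's limit may differ.
        [ "$i" -gt 200 ] && return 1
        sleep 0.1
    done
    return 0
}

# Usage mirroring the log: the aer tool touches the file once its
# asynchronous event fires, unblocking the test script.
( sleep 0.2; touch /tmp/aer_touch_file ) &
waitforfile /tmp/aer_touch_file && echo "AER touch file observed"
rm -f /tmp/aer_touch_file
```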
00:17:28.372 [ 00:17:28.372 { 00:17:28.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:28.372 "subtype": "Discovery", 00:17:28.372 "listen_addresses": [], 00:17:28.372 "allow_any_host": true, 00:17:28.372 "hosts": [] 00:17:28.372 }, 00:17:28.372 { 00:17:28.372 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:28.372 "subtype": "NVMe", 00:17:28.372 "listen_addresses": [ 00:17:28.372 { 00:17:28.372 "trtype": "VFIOUSER", 00:17:28.372 "adrfam": "IPv4", 00:17:28.372 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:28.372 "trsvcid": "0" 00:17:28.372 } 00:17:28.372 ], 00:17:28.372 "allow_any_host": true, 00:17:28.372 "hosts": [], 00:17:28.372 "serial_number": "SPDK1", 00:17:28.372 "model_number": "SPDK bdev Controller", 00:17:28.372 "max_namespaces": 32, 00:17:28.372 "min_cntlid": 1, 00:17:28.372 "max_cntlid": 65519, 00:17:28.372 "namespaces": [ 00:17:28.372 { 00:17:28.372 "nsid": 1, 00:17:28.372 "bdev_name": "Malloc1", 00:17:28.372 "name": "Malloc1", 00:17:28.372 "nguid": "9E69E4570A3C49C1B6786CCD9E64929B", 00:17:28.372 "uuid": "9e69e457-0a3c-49c1-b678-6ccd9e64929b" 00:17:28.372 }, 00:17:28.372 { 00:17:28.372 "nsid": 2, 00:17:28.372 "bdev_name": "Malloc3", 00:17:28.372 "name": "Malloc3", 00:17:28.372 "nguid": "798D9928FABD41DBA5EBD3E6C9512058", 00:17:28.372 "uuid": "798d9928-fabd-41db-a5eb-d3e6c9512058" 00:17:28.372 } 00:17:28.372 ] 00:17:28.372 }, 00:17:28.372 { 00:17:28.372 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:28.372 "subtype": "NVMe", 00:17:28.372 "listen_addresses": [ 00:17:28.372 { 00:17:28.372 "trtype": "VFIOUSER", 00:17:28.372 "adrfam": "IPv4", 00:17:28.372 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:28.372 "trsvcid": "0" 00:17:28.372 } 00:17:28.372 ], 00:17:28.372 "allow_any_host": true, 00:17:28.372 "hosts": [], 00:17:28.372 "serial_number": "SPDK2", 00:17:28.372 "model_number": "SPDK bdev Controller", 00:17:28.372 "max_namespaces": 32, 00:17:28.372 "min_cntlid": 1, 00:17:28.372 "max_cntlid": 65519, 00:17:28.372 "namespaces": [ 
00:17:28.372 { 00:17:28.372 "nsid": 1, 00:17:28.372 "bdev_name": "Malloc2", 00:17:28.372 "name": "Malloc2", 00:17:28.372 "nguid": "0C16D9BD3970440F917513F44EA8F2D4", 00:17:28.372 "uuid": "0c16d9bd-3970-440f-9175-13f44ea8f2d4" 00:17:28.372 } 00:17:28.372 ] 00:17:28.372 } 00:17:28.372 ] 00:17:28.372 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2627012 00:17:28.372 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:28.372 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:28.372 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:28.372 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:28.372 [2024-12-09 10:28:06.027607] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
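The `nvmf_get_subsystems` dumps above pair each namespace's `uuid` with an `nguid`; in this log the NGUID is simply the UUID's hex digits, upper-cased, with the dashes stripped. A small cross-check of that relationship (the sample values are copied from the log output above; the helper itself is illustrative):

```shell
# Derive the NGUID form of a UUID as it appears in the RPC output above:
# drop the dashes and upper-case the hex letters.
uuid_to_nguid() {
    printf '%s' "$1" | tr -d '-' | tr 'a-f' 'A-F'
}

# Malloc1's namespace, as reported by nvmf_get_subsystems in the log.
uuid="9e69e457-0a3c-49c1-b678-6ccd9e64929b"
nguid="9E69E4570A3C49C1B6786CCD9E64929B"

if [ "$(uuid_to_nguid "$uuid")" = "$nguid" ]; then
    echo "Malloc1: nguid matches uuid"
fi
```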
00:17:28.372 [2024-12-09 10:28:06.027639] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627152 ] 00:17:28.372 [2024-12-09 10:28:06.068172] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:28.372 [2024-12-09 10:28:06.073415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:28.372 [2024-12-09 10:28:06.073441] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f370dc5f000 00:17:28.372 [2024-12-09 10:28:06.074414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:28.372 [2024-12-09 10:28:06.075417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:28.372 [2024-12-09 10:28:06.076423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:28.372 [2024-12-09 10:28:06.077427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:28.372 [2024-12-09 10:28:06.078436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:28.372 [2024-12-09 10:28:06.079451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:28.372 [2024-12-09 10:28:06.080465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:28.372 
[2024-12-09 10:28:06.081477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:28.372 [2024-12-09 10:28:06.082489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:28.372 [2024-12-09 10:28:06.082500] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f370dc54000 00:17:28.372 [2024-12-09 10:28:06.083416] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:28.372 [2024-12-09 10:28:06.092768] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:28.372 [2024-12-09 10:28:06.092792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:28.629 [2024-12-09 10:28:06.097884] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:28.629 [2024-12-09 10:28:06.097919] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:28.629 [2024-12-09 10:28:06.097992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:28.629 [2024-12-09 10:28:06.098004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:28.629 [2024-12-09 10:28:06.098009] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:28.629 [2024-12-09 10:28:06.098886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:28.629 [2024-12-09 10:28:06.098899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:28.629 [2024-12-09 10:28:06.098905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:28.629 [2024-12-09 10:28:06.099890] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:28.629 [2024-12-09 10:28:06.099899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:28.629 [2024-12-09 10:28:06.099905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:28.629 [2024-12-09 10:28:06.100899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:28.629 [2024-12-09 10:28:06.100909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:28.629 [2024-12-09 10:28:06.101908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:28.629 [2024-12-09 10:28:06.101916] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:28.629 [2024-12-09 10:28:06.101921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:28.629 [2024-12-09 10:28:06.101926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:28.629 [2024-12-09 10:28:06.102034] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:28.629 [2024-12-09 10:28:06.102039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:28.629 [2024-12-09 10:28:06.102043] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:28.629 [2024-12-09 10:28:06.102919] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:28.629 [2024-12-09 10:28:06.103926] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:28.629 [2024-12-09 10:28:06.104939] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:28.629 [2024-12-09 10:28:06.105942] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:28.629 [2024-12-09 10:28:06.105980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:28.629 [2024-12-09 10:28:06.106951] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:28.629 [2024-12-09 10:28:06.106960] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:28.629 [2024-12-09 10:28:06.106964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.106981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:28.629 [2024-12-09 10:28:06.106988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.107002] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:28.629 [2024-12-09 10:28:06.107006] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:28.629 [2024-12-09 10:28:06.107009] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:28.629 [2024-12-09 10:28:06.107020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:28.629 [2024-12-09 10:28:06.115815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:28.629 [2024-12-09 10:28:06.115829] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:28.629 [2024-12-09 10:28:06.115834] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:28.629 [2024-12-09 10:28:06.115838] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:28.629 [2024-12-09 10:28:06.115842] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:28.629 [2024-12-09 10:28:06.115846] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:28.629 [2024-12-09 10:28:06.115850] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:28.629 [2024-12-09 10:28:06.115854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.115861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.115873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:28.629 [2024-12-09 10:28:06.123815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:28.629 [2024-12-09 10:28:06.123827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.629 [2024-12-09 10:28:06.123835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.629 [2024-12-09 10:28:06.123842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.629 [2024-12-09 10:28:06.123849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.629 [2024-12-09 10:28:06.123854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.123862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.123871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:28.629 [2024-12-09 10:28:06.131816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:28.629 [2024-12-09 10:28:06.131824] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:28.629 [2024-12-09 10:28:06.131829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.131834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.131840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.131848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:28.629 [2024-12-09 10:28:06.139814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:28.629 [2024-12-09 10:28:06.139869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.139876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:28.629 
[2024-12-09 10:28:06.139883] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:28.629 [2024-12-09 10:28:06.139887] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:28.629 [2024-12-09 10:28:06.139890] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:28.629 [2024-12-09 10:28:06.139896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:28.629 [2024-12-09 10:28:06.147814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:28.629 [2024-12-09 10:28:06.147825] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:28.629 [2024-12-09 10:28:06.147840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.147849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.147856] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:28.629 [2024-12-09 10:28:06.147860] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:28.629 [2024-12-09 10:28:06.147863] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:28.629 [2024-12-09 10:28:06.147869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:28.629 [2024-12-09 10:28:06.155817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:28.629 [2024-12-09 10:28:06.155830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.155838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.155844] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:28.629 [2024-12-09 10:28:06.155848] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:28.629 [2024-12-09 10:28:06.155851] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:28.629 [2024-12-09 10:28:06.155857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:28.629 [2024-12-09 10:28:06.163813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:28.629 [2024-12-09 10:28:06.163822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.163828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:28.629 [2024-12-09 10:28:06.163835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:28.630 [2024-12-09 10:28:06.163842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:17:28.630 [2024-12-09 10:28:06.163847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:28.630 [2024-12-09 10:28:06.163851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:28.630 [2024-12-09 10:28:06.163856] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:28.630 [2024-12-09 10:28:06.163860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:28.630 [2024-12-09 10:28:06.163865] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:28.630 [2024-12-09 10:28:06.163881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.171811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.171824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.179813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.179824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.187812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 
10:28:06.187824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.195811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.195826] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:28.630 [2024-12-09 10:28:06.195830] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:28.630 [2024-12-09 10:28:06.195834] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:28.630 [2024-12-09 10:28:06.195837] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:28.630 [2024-12-09 10:28:06.195840] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:28.630 [2024-12-09 10:28:06.195846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:28.630 [2024-12-09 10:28:06.195852] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:28.630 [2024-12-09 10:28:06.195856] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:28.630 [2024-12-09 10:28:06.195859] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:28.630 [2024-12-09 10:28:06.195864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.195870] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:28.630 [2024-12-09 10:28:06.195874] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:28.630 [2024-12-09 10:28:06.195877] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:28.630 [2024-12-09 10:28:06.195882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.195889] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:28.630 [2024-12-09 10:28:06.195892] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:28.630 [2024-12-09 10:28:06.195895] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:28.630 [2024-12-09 10:28:06.195901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.203812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.203826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.203835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.203841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:28.630 ===================================================== 00:17:28.630 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:28.630 ===================================================== 00:17:28.630 Controller Capabilities/Features 00:17:28.630 
================================ 00:17:28.630 Vendor ID: 4e58 00:17:28.630 Subsystem Vendor ID: 4e58 00:17:28.630 Serial Number: SPDK2 00:17:28.630 Model Number: SPDK bdev Controller 00:17:28.630 Firmware Version: 25.01 00:17:28.630 Recommended Arb Burst: 6 00:17:28.630 IEEE OUI Identifier: 8d 6b 50 00:17:28.630 Multi-path I/O 00:17:28.630 May have multiple subsystem ports: Yes 00:17:28.630 May have multiple controllers: Yes 00:17:28.630 Associated with SR-IOV VF: No 00:17:28.630 Max Data Transfer Size: 131072 00:17:28.630 Max Number of Namespaces: 32 00:17:28.630 Max Number of I/O Queues: 127 00:17:28.630 NVMe Specification Version (VS): 1.3 00:17:28.630 NVMe Specification Version (Identify): 1.3 00:17:28.630 Maximum Queue Entries: 256 00:17:28.630 Contiguous Queues Required: Yes 00:17:28.630 Arbitration Mechanisms Supported 00:17:28.630 Weighted Round Robin: Not Supported 00:17:28.630 Vendor Specific: Not Supported 00:17:28.630 Reset Timeout: 15000 ms 00:17:28.630 Doorbell Stride: 4 bytes 00:17:28.630 NVM Subsystem Reset: Not Supported 00:17:28.630 Command Sets Supported 00:17:28.630 NVM Command Set: Supported 00:17:28.630 Boot Partition: Not Supported 00:17:28.630 Memory Page Size Minimum: 4096 bytes 00:17:28.630 Memory Page Size Maximum: 4096 bytes 00:17:28.630 Persistent Memory Region: Not Supported 00:17:28.630 Optional Asynchronous Events Supported 00:17:28.630 Namespace Attribute Notices: Supported 00:17:28.630 Firmware Activation Notices: Not Supported 00:17:28.630 ANA Change Notices: Not Supported 00:17:28.630 PLE Aggregate Log Change Notices: Not Supported 00:17:28.630 LBA Status Info Alert Notices: Not Supported 00:17:28.630 EGE Aggregate Log Change Notices: Not Supported 00:17:28.630 Normal NVM Subsystem Shutdown event: Not Supported 00:17:28.630 Zone Descriptor Change Notices: Not Supported 00:17:28.630 Discovery Log Change Notices: Not Supported 00:17:28.630 Controller Attributes 00:17:28.630 128-bit Host Identifier: Supported 00:17:28.630 
Non-Operational Permissive Mode: Not Supported 00:17:28.630 NVM Sets: Not Supported 00:17:28.630 Read Recovery Levels: Not Supported 00:17:28.630 Endurance Groups: Not Supported 00:17:28.630 Predictable Latency Mode: Not Supported 00:17:28.630 Traffic Based Keep ALive: Not Supported 00:17:28.630 Namespace Granularity: Not Supported 00:17:28.630 SQ Associations: Not Supported 00:17:28.630 UUID List: Not Supported 00:17:28.630 Multi-Domain Subsystem: Not Supported 00:17:28.630 Fixed Capacity Management: Not Supported 00:17:28.630 Variable Capacity Management: Not Supported 00:17:28.630 Delete Endurance Group: Not Supported 00:17:28.630 Delete NVM Set: Not Supported 00:17:28.630 Extended LBA Formats Supported: Not Supported 00:17:28.630 Flexible Data Placement Supported: Not Supported 00:17:28.630 00:17:28.630 Controller Memory Buffer Support 00:17:28.630 ================================ 00:17:28.630 Supported: No 00:17:28.630 00:17:28.630 Persistent Memory Region Support 00:17:28.630 ================================ 00:17:28.630 Supported: No 00:17:28.630 00:17:28.630 Admin Command Set Attributes 00:17:28.630 ============================ 00:17:28.630 Security Send/Receive: Not Supported 00:17:28.630 Format NVM: Not Supported 00:17:28.630 Firmware Activate/Download: Not Supported 00:17:28.630 Namespace Management: Not Supported 00:17:28.630 Device Self-Test: Not Supported 00:17:28.630 Directives: Not Supported 00:17:28.630 NVMe-MI: Not Supported 00:17:28.630 Virtualization Management: Not Supported 00:17:28.630 Doorbell Buffer Config: Not Supported 00:17:28.630 Get LBA Status Capability: Not Supported 00:17:28.630 Command & Feature Lockdown Capability: Not Supported 00:17:28.630 Abort Command Limit: 4 00:17:28.630 Async Event Request Limit: 4 00:17:28.630 Number of Firmware Slots: N/A 00:17:28.630 Firmware Slot 1 Read-Only: N/A 00:17:28.630 Firmware Activation Without Reset: N/A 00:17:28.630 Multiple Update Detection Support: N/A 00:17:28.630 Firmware Update 
Granularity: No Information Provided 00:17:28.630 Per-Namespace SMART Log: No 00:17:28.630 Asymmetric Namespace Access Log Page: Not Supported 00:17:28.630 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:28.630 Command Effects Log Page: Supported 00:17:28.630 Get Log Page Extended Data: Supported 00:17:28.630 Telemetry Log Pages: Not Supported 00:17:28.630 Persistent Event Log Pages: Not Supported 00:17:28.630 Supported Log Pages Log Page: May Support 00:17:28.630 Commands Supported & Effects Log Page: Not Supported 00:17:28.630 Feature Identifiers & Effects Log Page:May Support 00:17:28.630 NVMe-MI Commands & Effects Log Page: May Support 00:17:28.630 Data Area 4 for Telemetry Log: Not Supported 00:17:28.630 Error Log Page Entries Supported: 128 00:17:28.630 Keep Alive: Supported 00:17:28.630 Keep Alive Granularity: 10000 ms 00:17:28.630 00:17:28.630 NVM Command Set Attributes 00:17:28.630 ========================== 00:17:28.630 Submission Queue Entry Size 00:17:28.630 Max: 64 00:17:28.630 Min: 64 00:17:28.630 Completion Queue Entry Size 00:17:28.630 Max: 16 00:17:28.630 Min: 16 00:17:28.630 Number of Namespaces: 32 00:17:28.630 Compare Command: Supported 00:17:28.630 Write Uncorrectable Command: Not Supported 00:17:28.630 Dataset Management Command: Supported 00:17:28.630 Write Zeroes Command: Supported 00:17:28.630 Set Features Save Field: Not Supported 00:17:28.630 Reservations: Not Supported 00:17:28.630 Timestamp: Not Supported 00:17:28.630 Copy: Supported 00:17:28.630 Volatile Write Cache: Present 00:17:28.630 Atomic Write Unit (Normal): 1 00:17:28.630 Atomic Write Unit (PFail): 1 00:17:28.630 Atomic Compare & Write Unit: 1 00:17:28.630 Fused Compare & Write: Supported 00:17:28.630 Scatter-Gather List 00:17:28.630 SGL Command Set: Supported (Dword aligned) 00:17:28.630 SGL Keyed: Not Supported 00:17:28.630 SGL Bit Bucket Descriptor: Not Supported 00:17:28.630 SGL Metadata Pointer: Not Supported 00:17:28.630 Oversized SGL: Not Supported 00:17:28.630 SGL 
Metadata Address: Not Supported 00:17:28.630 SGL Offset: Not Supported 00:17:28.630 Transport SGL Data Block: Not Supported 00:17:28.630 Replay Protected Memory Block: Not Supported 00:17:28.630 00:17:28.630 Firmware Slot Information 00:17:28.630 ========================= 00:17:28.630 Active slot: 1 00:17:28.630 Slot 1 Firmware Revision: 25.01 00:17:28.630 00:17:28.630 00:17:28.630 Commands Supported and Effects 00:17:28.630 ============================== 00:17:28.630 Admin Commands 00:17:28.630 -------------- 00:17:28.630 Get Log Page (02h): Supported 00:17:28.630 Identify (06h): Supported 00:17:28.630 Abort (08h): Supported 00:17:28.630 Set Features (09h): Supported 00:17:28.630 Get Features (0Ah): Supported 00:17:28.630 Asynchronous Event Request (0Ch): Supported 00:17:28.630 Keep Alive (18h): Supported 00:17:28.630 I/O Commands 00:17:28.630 ------------ 00:17:28.630 Flush (00h): Supported LBA-Change 00:17:28.630 Write (01h): Supported LBA-Change 00:17:28.630 Read (02h): Supported 00:17:28.630 Compare (05h): Supported 00:17:28.630 Write Zeroes (08h): Supported LBA-Change 00:17:28.630 Dataset Management (09h): Supported LBA-Change 00:17:28.630 Copy (19h): Supported LBA-Change 00:17:28.630 00:17:28.630 Error Log 00:17:28.630 ========= 00:17:28.630 00:17:28.630 Arbitration 00:17:28.630 =========== 00:17:28.630 Arbitration Burst: 1 00:17:28.630 00:17:28.630 Power Management 00:17:28.630 ================ 00:17:28.630 Number of Power States: 1 00:17:28.630 Current Power State: Power State #0 00:17:28.630 Power State #0: 00:17:28.630 Max Power: 0.00 W 00:17:28.630 Non-Operational State: Operational 00:17:28.630 Entry Latency: Not Reported 00:17:28.630 Exit Latency: Not Reported 00:17:28.630 Relative Read Throughput: 0 00:17:28.630 Relative Read Latency: 0 00:17:28.630 Relative Write Throughput: 0 00:17:28.630 Relative Write Latency: 0 00:17:28.630 Idle Power: Not Reported 00:17:28.630 Active Power: Not Reported 00:17:28.630 Non-Operational Permissive Mode: Not 
Supported 00:17:28.630 00:17:28.630 Health Information 00:17:28.630 ================== 00:17:28.630 Critical Warnings: 00:17:28.630 Available Spare Space: OK 00:17:28.630 Temperature: OK 00:17:28.630 Device Reliability: OK 00:17:28.630 Read Only: No 00:17:28.630 Volatile Memory Backup: OK 00:17:28.630 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:28.630 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:28.630 Available Spare: 0% 00:17:28.630 Available Sp[2024-12-09 10:28:06.203938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:28.630 [2024-12-09 10:28:06.209101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.209179] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:28.630 [2024-12-09 10:28:06.209189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.209195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.209200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.209206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.630 [2024-12-09 10:28:06.209882] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:28.630 [2024-12-09 10:28:06.209893] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:28.630 
[2024-12-09 10:28:06.210881] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:28.630 [2024-12-09 10:28:06.210922] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:28.630 [2024-12-09 10:28:06.210929] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:28.630 [2024-12-09 10:28:06.211891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:28.630 [2024-12-09 10:28:06.211902] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:28.630 [2024-12-09 10:28:06.211951] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:28.630 [2024-12-09 10:28:06.215813] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:28.630 are Threshold: 0% 00:17:28.630 Life Percentage Used: 0% 00:17:28.630 Data Units Read: 0 00:17:28.630 Data Units Written: 0 00:17:28.630 Host Read Commands: 0 00:17:28.630 Host Write Commands: 0 00:17:28.630 Controller Busy Time: 0 minutes 00:17:28.630 Power Cycles: 0 00:17:28.630 Power On Hours: 0 hours 00:17:28.630 Unsafe Shutdowns: 0 00:17:28.630 Unrecoverable Media Errors: 0 00:17:28.630 Lifetime Error Log Entries: 0 00:17:28.630 Warning Temperature Time: 0 minutes 00:17:28.630 Critical Temperature Time: 0 minutes 00:17:28.630 00:17:28.630 Number of Queues 00:17:28.631 ================ 00:17:28.631 Number of I/O Submission Queues: 127 00:17:28.631 Number of I/O Completion Queues: 127 00:17:28.631 00:17:28.631 Active Namespaces 00:17:28.631 ================= 00:17:28.631 Namespace ID:1 00:17:28.631 Error Recovery Timeout: Unlimited 
00:17:28.631 Command Set Identifier: NVM (00h) 00:17:28.631 Deallocate: Supported 00:17:28.631 Deallocated/Unwritten Error: Not Supported 00:17:28.631 Deallocated Read Value: Unknown 00:17:28.631 Deallocate in Write Zeroes: Not Supported 00:17:28.631 Deallocated Guard Field: 0xFFFF 00:17:28.631 Flush: Supported 00:17:28.631 Reservation: Supported 00:17:28.631 Namespace Sharing Capabilities: Multiple Controllers 00:17:28.631 Size (in LBAs): 131072 (0GiB) 00:17:28.631 Capacity (in LBAs): 131072 (0GiB) 00:17:28.631 Utilization (in LBAs): 131072 (0GiB) 00:17:28.631 NGUID: 0C16D9BD3970440F917513F44EA8F2D4 00:17:28.631 UUID: 0c16d9bd-3970-440f-9175-13f44ea8f2d4 00:17:28.631 Thin Provisioning: Not Supported 00:17:28.631 Per-NS Atomic Units: Yes 00:17:28.631 Atomic Boundary Size (Normal): 0 00:17:28.631 Atomic Boundary Size (PFail): 0 00:17:28.631 Atomic Boundary Offset: 0 00:17:28.631 Maximum Single Source Range Length: 65535 00:17:28.631 Maximum Copy Length: 65535 00:17:28.631 Maximum Source Range Count: 1 00:17:28.631 NGUID/EUI64 Never Reused: No 00:17:28.631 Namespace Write Protected: No 00:17:28.631 Number of LBA Formats: 1 00:17:28.631 Current LBA Format: LBA Format #00 00:17:28.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:28.631 00:17:28.631 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:28.887 [2024-12-09 10:28:06.444037] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:34.158 Initializing NVMe Controllers 00:17:34.158 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:34.158 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:17:34.158 Initialization complete. Launching workers. 00:17:34.158 ======================================================== 00:17:34.158 Latency(us) 00:17:34.158 Device Information : IOPS MiB/s Average min max 00:17:34.158 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39832.17 155.59 3213.76 966.15 10598.92 00:17:34.158 ======================================================== 00:17:34.158 Total : 39832.17 155.59 3213.76 966.15 10598.92 00:17:34.158 00:17:34.158 [2024-12-09 10:28:11.546065] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:34.158 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:34.158 [2024-12-09 10:28:11.778778] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:39.415 Initializing NVMe Controllers 00:17:39.415 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:39.415 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:39.415 Initialization complete. Launching workers. 
00:17:39.415 ======================================================== 00:17:39.415 Latency(us) 00:17:39.415 Device Information : IOPS MiB/s Average min max 00:17:39.415 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39963.78 156.11 3203.10 976.08 8578.53 00:17:39.415 ======================================================== 00:17:39.416 Total : 39963.78 156.11 3203.10 976.08 8578.53 00:17:39.416 00:17:39.416 [2024-12-09 10:28:16.796970] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:39.416 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:39.416 [2024-12-09 10:28:17.001222] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:44.672 [2024-12-09 10:28:22.134908] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:44.672 Initializing NVMe Controllers 00:17:44.672 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:44.672 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:44.672 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:44.672 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:44.672 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:44.672 Initialization complete. Launching workers. 
00:17:44.672 Starting thread on core 2 00:17:44.672 Starting thread on core 3 00:17:44.672 Starting thread on core 1 00:17:44.672 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:44.929 [2024-12-09 10:28:22.424279] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:48.204 [2024-12-09 10:28:25.474998] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:48.204 Initializing NVMe Controllers 00:17:48.204 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.204 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:48.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:48.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:48.204 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:48.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:48.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:48.205 Initialization complete. Launching workers. 
00:17:48.205 Starting thread on core 1 with urgent priority queue 00:17:48.205 Starting thread on core 2 with urgent priority queue 00:17:48.205 Starting thread on core 3 with urgent priority queue 00:17:48.205 Starting thread on core 0 with urgent priority queue 00:17:48.205 SPDK bdev Controller (SPDK2 ) core 0: 8561.00 IO/s 11.68 secs/100000 ios 00:17:48.205 SPDK bdev Controller (SPDK2 ) core 1: 9115.33 IO/s 10.97 secs/100000 ios 00:17:48.205 SPDK bdev Controller (SPDK2 ) core 2: 10556.33 IO/s 9.47 secs/100000 ios 00:17:48.205 SPDK bdev Controller (SPDK2 ) core 3: 8226.67 IO/s 12.16 secs/100000 ios 00:17:48.205 ======================================================== 00:17:48.205 00:17:48.205 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:48.205 [2024-12-09 10:28:25.755299] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:48.205 Initializing NVMe Controllers 00:17:48.205 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.205 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.205 Namespace ID: 1 size: 0GB 00:17:48.205 Initialization complete. 00:17:48.205 INFO: using host memory buffer for IO 00:17:48.205 Hello world! 
00:17:48.205 [2024-12-09 10:28:25.765356] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:48.205 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:48.462 [2024-12-09 10:28:26.043575] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:49.834 Initializing NVMe Controllers 00:17:49.834 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:49.834 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:49.834 Initialization complete. Launching workers. 00:17:49.834 submit (in ns) avg, min, max = 5083.8, 3140.0, 3999899.0 00:17:49.834 complete (in ns) avg, min, max = 21229.9, 1715.2, 5992483.8 00:17:49.834 00:17:49.834 Submit histogram 00:17:49.834 ================ 00:17:49.834 Range in us Cumulative Count 00:17:49.834 3.139 - 3.154: 0.0181% ( 3) 00:17:49.834 3.154 - 3.170: 0.0301% ( 2) 00:17:49.834 3.170 - 3.185: 0.0663% ( 6) 00:17:49.834 3.185 - 3.200: 0.1506% ( 14) 00:17:49.834 3.200 - 3.215: 0.6508% ( 83) 00:17:49.834 3.215 - 3.230: 2.4043% ( 291) 00:17:49.834 3.230 - 3.246: 5.1642% ( 458) 00:17:49.834 3.246 - 3.261: 8.4363% ( 543) 00:17:49.834 3.261 - 3.276: 13.0401% ( 764) 00:17:49.834 3.276 - 3.291: 18.5478% ( 914) 00:17:49.834 3.291 - 3.307: 24.6761% ( 1017) 00:17:49.834 3.307 - 3.322: 31.0093% ( 1051) 00:17:49.834 3.322 - 3.337: 37.5173% ( 1080) 00:17:49.834 3.337 - 3.352: 42.8081% ( 878) 00:17:49.834 3.352 - 3.368: 47.9422% ( 852) 00:17:49.834 3.368 - 3.383: 54.9382% ( 1161) 00:17:49.834 3.383 - 3.398: 61.3679% ( 1067) 00:17:49.834 3.398 - 3.413: 66.2669% ( 813) 00:17:49.834 3.413 - 3.429: 71.7324% ( 907) 00:17:49.834 3.429 - 3.444: 77.3667% ( 935) 00:17:49.834 3.444 - 3.459: 81.3197% ( 656) 
00:17:49.834 3.459 - 3.474: 84.1940% ( 477) 00:17:49.834 3.474 - 3.490: 86.3272% ( 354) 00:17:49.834 3.490 - 3.505: 87.6529% ( 220) 00:17:49.834 3.505 - 3.520: 88.6050% ( 158) 00:17:49.834 3.520 - 3.535: 89.2679% ( 110) 00:17:49.834 3.535 - 3.550: 89.9247% ( 109) 00:17:49.834 3.550 - 3.566: 90.4851% ( 93) 00:17:49.834 3.566 - 3.581: 91.0636% ( 96) 00:17:49.834 3.581 - 3.596: 91.8048% ( 123) 00:17:49.834 3.596 - 3.611: 92.6183% ( 135) 00:17:49.834 3.611 - 3.627: 93.4016% ( 130) 00:17:49.834 3.627 - 3.642: 94.4019% ( 166) 00:17:49.834 3.642 - 3.657: 95.2636% ( 143) 00:17:49.834 3.657 - 3.672: 96.0832% ( 136) 00:17:49.834 3.672 - 3.688: 96.7581% ( 112) 00:17:49.834 3.688 - 3.703: 97.5595% ( 133) 00:17:49.834 3.703 - 3.718: 98.0777% ( 86) 00:17:49.834 3.718 - 3.733: 98.5237% ( 74) 00:17:49.834 3.733 - 3.749: 98.8852% ( 60) 00:17:49.834 3.749 - 3.764: 99.0961% ( 35) 00:17:49.834 3.764 - 3.779: 99.2528% ( 26) 00:17:49.834 3.779 - 3.794: 99.3613% ( 18) 00:17:49.834 3.794 - 3.810: 99.4878% ( 21) 00:17:49.834 3.810 - 3.825: 99.5601% ( 12) 00:17:49.834 3.825 - 3.840: 99.5902% ( 5) 00:17:49.834 3.840 - 3.855: 99.6143% ( 4) 00:17:49.834 3.855 - 3.870: 99.6264% ( 2) 00:17:49.834 3.886 - 3.901: 99.6324% ( 1) 00:17:49.834 5.211 - 5.242: 99.6445% ( 2) 00:17:49.834 5.394 - 5.425: 99.6505% ( 1) 00:17:49.834 5.425 - 5.455: 99.6565% ( 1) 00:17:49.834 5.486 - 5.516: 99.6686% ( 2) 00:17:49.834 5.669 - 5.699: 99.6746% ( 1) 00:17:49.834 5.699 - 5.730: 99.6867% ( 2) 00:17:49.834 5.790 - 5.821: 99.6987% ( 2) 00:17:49.834 5.821 - 5.851: 99.7047% ( 1) 00:17:49.834 5.851 - 5.882: 99.7108% ( 1) 00:17:49.834 5.943 - 5.973: 99.7228% ( 2) 00:17:49.834 5.973 - 6.004: 99.7409% ( 3) 00:17:49.834 6.065 - 6.095: 99.7469% ( 1) 00:17:49.834 6.126 - 6.156: 99.7529% ( 1) 00:17:49.835 6.156 - 6.187: 99.7590% ( 1) 00:17:49.835 6.217 - 6.248: 99.7650% ( 1) 00:17:49.835 6.278 - 6.309: 99.7710% ( 1) 00:17:49.835 6.400 - 6.430: 99.7770% ( 1) 00:17:49.835 6.430 - 6.461: 99.7831% ( 1) 00:17:49.835 6.461 - 6.491: 
99.7951% ( 2) 00:17:49.835 6.522 - 6.552: 99.8072% ( 2) 00:17:49.835 6.552 - 6.583: 99.8132% ( 1) 00:17:49.835 6.583 - 6.613: 99.8252% ( 2) 00:17:49.835 6.613 - 6.644: 99.8373% ( 2) 00:17:49.835 6.674 - 6.705: 99.8433% ( 1) 00:17:49.835 6.735 - 6.766: 99.8494% ( 1) 00:17:49.835 6.857 - 6.888: 99.8614% ( 2) 00:17:49.835 6.918 - 6.949: 99.8735% ( 2) 00:17:49.835 7.101 - 7.131: 99.8915% ( 3) 00:17:49.835 7.131 - 7.162: 99.8976% ( 1) 00:17:49.835 7.223 - 7.253: 99.9096% ( 2) 00:17:49.835 [2024-12-09 10:28:27.144819] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:49.835 7.314 - 7.345: 99.9156% ( 1) 00:17:49.835 7.771 - 7.802: 99.9277% ( 2) 00:17:49.835 7.863 - 7.924: 99.9337% ( 1) 00:17:49.835 7.985 - 8.046: 99.9397% ( 1) 00:17:49.835 9.082 - 9.143: 99.9458% ( 1) 00:17:49.835 9.143 - 9.204: 99.9518% ( 1) 00:17:49.835 19.017 - 19.139: 99.9578% ( 1) 00:17:49.835 3994.575 - 4025.783: 100.0000% ( 7) 00:17:49.835 00:17:49.835 Complete histogram 00:17:49.835 ================== 00:17:49.835 Range in us Cumulative Count 00:17:49.835 1.714 - 1.722: 0.0542% ( 9) 00:17:49.835 1.722 - 1.730: 0.2892% ( 39) 00:17:49.835 1.730 - 1.737: 0.6146% ( 54) 00:17:49.835 1.737 - 1.745: 0.7834% ( 28) 00:17:49.835 1.745 - 1.752: 0.8376% ( 9) 00:17:49.835 1.752 - 1.760: 0.8497% ( 2) 00:17:49.835 1.760 - 1.768: 1.2715% ( 70) 00:17:49.835 1.768 - 1.775: 7.9542% ( 1109) 00:17:49.835 1.775 - 1.783: 33.5402% ( 4246) 00:17:49.835 1.783 - 1.790: 62.8683% ( 4867) 00:17:49.835 1.790 - 1.798: 74.8298% ( 1985) 00:17:49.835 1.798 - 1.806: 78.3489% ( 584) 00:17:49.835 1.806 - 1.813: 80.4941% ( 356) 00:17:49.835 1.813 - 1.821: 81.9042% ( 234) 00:17:49.835 1.821 - 1.829: 83.9289% ( 336) 00:17:49.835 1.829 - 1.836: 88.1832% ( 706) 00:17:49.835 1.836 - 1.844: 92.8894% ( 781) 00:17:49.835 1.844 - 1.851: 95.6372% ( 456) 00:17:49.835 1.851 - 1.859: 97.1317% ( 248) 00:17:49.835 1.859 - 1.867: 98.2163% ( 180) 00:17:49.835 1.867 - 1.874: 98.6984% ( 80) 
00:17:49.835 1.874 - 1.882: 98.9756% ( 46) 00:17:49.835 1.882 - 1.890: 99.1383% ( 27) 00:17:49.835 1.890 - 1.897: 99.2166% ( 13) 00:17:49.835 1.897 - 1.905: 99.2588% ( 7) 00:17:49.835 1.905 - 1.912: 99.2829% ( 4) 00:17:49.835 1.912 - 1.920: 99.2950% ( 2) 00:17:49.835 1.920 - 1.928: 99.3130% ( 3) 00:17:49.835 1.928 - 1.935: 99.3251% ( 2) 00:17:49.835 1.935 - 1.943: 99.3552% ( 5) 00:17:49.835 1.943 - 1.950: 99.3613% ( 1) 00:17:49.835 1.950 - 1.966: 99.3914% ( 5) 00:17:49.835 1.996 - 2.011: 99.3974% ( 1) 00:17:49.835 2.072 - 2.088: 99.4034% ( 1) 00:17:49.835 2.179 - 2.194: 99.4095% ( 1) 00:17:49.835 4.023 - 4.053: 99.4155% ( 1) 00:17:49.835 4.145 - 4.175: 99.4215% ( 1) 00:17:49.835 4.206 - 4.236: 99.4336% ( 2) 00:17:49.835 5.150 - 5.181: 99.4396% ( 1) 00:17:49.835 5.425 - 5.455: 99.4456% ( 1) 00:17:49.835 5.455 - 5.486: 99.4516% ( 1) 00:17:49.835 5.547 - 5.577: 99.4577% ( 1) 00:17:49.835 5.669 - 5.699: 99.4637% ( 1) 00:17:49.835 5.760 - 5.790: 99.4697% ( 1) 00:17:49.835 6.309 - 6.339: 99.4757% ( 1) 00:17:49.835 6.583 - 6.613: 99.4818% ( 1) 00:17:49.835 8.107 - 8.168: 99.4878% ( 1) 00:17:49.835 10.362 - 10.423: 99.4938% ( 1) 00:17:49.835 12.312 - 12.373: 99.4998% ( 1) 00:17:49.835 14.324 - 14.385: 99.5059% ( 1) 00:17:49.835 38.766 - 39.010: 99.5119% ( 1) 00:17:49.835 1146.880 - 1154.682: 99.5179% ( 1) 00:17:49.835 1708.617 - 1716.419: 99.5240% ( 1) 00:17:49.835 3994.575 - 4025.783: 99.9879% ( 77) 00:17:49.835 5960.655 - 5991.863: 99.9940% ( 1) 00:17:49.835 5991.863 - 6023.070: 100.0000% ( 1) 00:17:49.835 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 
00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:49.835 [ 00:17:49.835 { 00:17:49.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:49.835 "subtype": "Discovery", 00:17:49.835 "listen_addresses": [], 00:17:49.835 "allow_any_host": true, 00:17:49.835 "hosts": [] 00:17:49.835 }, 00:17:49.835 { 00:17:49.835 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:49.835 "subtype": "NVMe", 00:17:49.835 "listen_addresses": [ 00:17:49.835 { 00:17:49.835 "trtype": "VFIOUSER", 00:17:49.835 "adrfam": "IPv4", 00:17:49.835 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:49.835 "trsvcid": "0" 00:17:49.835 } 00:17:49.835 ], 00:17:49.835 "allow_any_host": true, 00:17:49.835 "hosts": [], 00:17:49.835 "serial_number": "SPDK1", 00:17:49.835 "model_number": "SPDK bdev Controller", 00:17:49.835 "max_namespaces": 32, 00:17:49.835 "min_cntlid": 1, 00:17:49.835 "max_cntlid": 65519, 00:17:49.835 "namespaces": [ 00:17:49.835 { 00:17:49.835 "nsid": 1, 00:17:49.835 "bdev_name": "Malloc1", 00:17:49.835 "name": "Malloc1", 00:17:49.835 "nguid": "9E69E4570A3C49C1B6786CCD9E64929B", 00:17:49.835 "uuid": "9e69e457-0a3c-49c1-b678-6ccd9e64929b" 00:17:49.835 }, 00:17:49.835 { 00:17:49.835 "nsid": 2, 00:17:49.835 "bdev_name": "Malloc3", 00:17:49.835 "name": "Malloc3", 00:17:49.835 "nguid": "798D9928FABD41DBA5EBD3E6C9512058", 00:17:49.835 "uuid": "798d9928-fabd-41db-a5eb-d3e6c9512058" 00:17:49.835 } 00:17:49.835 ] 00:17:49.835 }, 00:17:49.835 { 00:17:49.835 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:49.835 "subtype": "NVMe", 00:17:49.835 "listen_addresses": [ 00:17:49.835 { 00:17:49.835 "trtype": "VFIOUSER", 00:17:49.835 "adrfam": "IPv4", 00:17:49.835 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:49.835 "trsvcid": "0" 
00:17:49.835 } 00:17:49.835 ], 00:17:49.835 "allow_any_host": true, 00:17:49.835 "hosts": [], 00:17:49.835 "serial_number": "SPDK2", 00:17:49.835 "model_number": "SPDK bdev Controller", 00:17:49.835 "max_namespaces": 32, 00:17:49.835 "min_cntlid": 1, 00:17:49.835 "max_cntlid": 65519, 00:17:49.835 "namespaces": [ 00:17:49.835 { 00:17:49.835 "nsid": 1, 00:17:49.835 "bdev_name": "Malloc2", 00:17:49.835 "name": "Malloc2", 00:17:49.835 "nguid": "0C16D9BD3970440F917513F44EA8F2D4", 00:17:49.835 "uuid": "0c16d9bd-3970-440f-9175-13f44ea8f2d4" 00:17:49.835 } 00:17:49.835 ] 00:17:49.835 } 00:17:49.835 ] 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2630685 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:49.835 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:49.835 [2024-12-09 10:28:27.534481] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:50.091 Malloc4 00:17:50.091 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:50.091 [2024-12-09 10:28:27.768217] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:50.091 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:50.091 Asynchronous Event Request test 00:17:50.091 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:50.091 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:50.091 Registering asynchronous event callbacks... 00:17:50.091 Starting namespace attribute notice tests for all controllers... 00:17:50.091 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:50.091 aer_cb - Changed Namespace 00:17:50.091 Cleaning up... 
00:17:50.381 [ 00:17:50.381 { 00:17:50.381 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:50.381 "subtype": "Discovery", 00:17:50.381 "listen_addresses": [], 00:17:50.381 "allow_any_host": true, 00:17:50.381 "hosts": [] 00:17:50.381 }, 00:17:50.381 { 00:17:50.381 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:50.381 "subtype": "NVMe", 00:17:50.381 "listen_addresses": [ 00:17:50.381 { 00:17:50.381 "trtype": "VFIOUSER", 00:17:50.381 "adrfam": "IPv4", 00:17:50.381 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:50.381 "trsvcid": "0" 00:17:50.381 } 00:17:50.381 ], 00:17:50.381 "allow_any_host": true, 00:17:50.381 "hosts": [], 00:17:50.381 "serial_number": "SPDK1", 00:17:50.381 "model_number": "SPDK bdev Controller", 00:17:50.381 "max_namespaces": 32, 00:17:50.381 "min_cntlid": 1, 00:17:50.381 "max_cntlid": 65519, 00:17:50.381 "namespaces": [ 00:17:50.381 { 00:17:50.381 "nsid": 1, 00:17:50.381 "bdev_name": "Malloc1", 00:17:50.381 "name": "Malloc1", 00:17:50.381 "nguid": "9E69E4570A3C49C1B6786CCD9E64929B", 00:17:50.381 "uuid": "9e69e457-0a3c-49c1-b678-6ccd9e64929b" 00:17:50.381 }, 00:17:50.381 { 00:17:50.381 "nsid": 2, 00:17:50.381 "bdev_name": "Malloc3", 00:17:50.381 "name": "Malloc3", 00:17:50.381 "nguid": "798D9928FABD41DBA5EBD3E6C9512058", 00:17:50.381 "uuid": "798d9928-fabd-41db-a5eb-d3e6c9512058" 00:17:50.381 } 00:17:50.381 ] 00:17:50.381 }, 00:17:50.381 { 00:17:50.381 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:50.381 "subtype": "NVMe", 00:17:50.381 "listen_addresses": [ 00:17:50.381 { 00:17:50.381 "trtype": "VFIOUSER", 00:17:50.381 "adrfam": "IPv4", 00:17:50.381 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:50.381 "trsvcid": "0" 00:17:50.381 } 00:17:50.381 ], 00:17:50.381 "allow_any_host": true, 00:17:50.381 "hosts": [], 00:17:50.381 "serial_number": "SPDK2", 00:17:50.381 "model_number": "SPDK bdev Controller", 00:17:50.381 "max_namespaces": 32, 00:17:50.381 "min_cntlid": 1, 00:17:50.381 "max_cntlid": 65519, 00:17:50.381 "namespaces": [ 
00:17:50.381 { 00:17:50.381 "nsid": 1, 00:17:50.381 "bdev_name": "Malloc2", 00:17:50.381 "name": "Malloc2", 00:17:50.381 "nguid": "0C16D9BD3970440F917513F44EA8F2D4", 00:17:50.381 "uuid": "0c16d9bd-3970-440f-9175-13f44ea8f2d4" 00:17:50.381 }, 00:17:50.381 { 00:17:50.381 "nsid": 2, 00:17:50.381 "bdev_name": "Malloc4", 00:17:50.381 "name": "Malloc4", 00:17:50.381 "nguid": "02EF53CEC15A447E99FBB600DBA750FA", 00:17:50.381 "uuid": "02ef53ce-c15a-447e-99fb-b600dba750fa" 00:17:50.381 } 00:17:50.381 ] 00:17:50.381 } 00:17:50.381 ] 00:17:50.381 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2630685 00:17:50.381 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:50.381 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2623073 00:17:50.381 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2623073 ']' 00:17:50.381 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2623073 00:17:50.381 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:50.381 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.381 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2623073 00:17:50.381 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.381 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.381 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2623073' 00:17:50.381 killing process with pid 2623073 00:17:50.381 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2623073 00:17:50.381 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2623073 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2630737 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2630737' 00:17:50.713 Process pid: 2630737 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2630737 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2630737 ']' 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.713 
10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.713 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:50.713 [2024-12-09 10:28:28.336889] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:50.713 [2024-12-09 10:28:28.337757] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:50.713 [2024-12-09 10:28:28.337801] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.713 [2024-12-09 10:28:28.415619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.011 [2024-12-09 10:28:28.456978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.011 [2024-12-09 10:28:28.457014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.011 [2024-12-09 10:28:28.457022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.011 [2024-12-09 10:28:28.457029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.011 [2024-12-09 10:28:28.457035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.011 [2024-12-09 10:28:28.458544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.011 [2024-12-09 10:28:28.458650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.011 [2024-12-09 10:28:28.458733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.011 [2024-12-09 10:28:28.458733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.011 [2024-12-09 10:28:28.527071] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:51.011 [2024-12-09 10:28:28.527077] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:51.011 [2024-12-09 10:28:28.527829] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:51.011 [2024-12-09 10:28:28.527861] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:51.011 [2024-12-09 10:28:28.527930] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:51.011 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.011 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:51.011 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:51.946 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:52.204 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:52.204 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:52.204 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:52.204 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:52.204 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:52.462 Malloc1 00:17:52.462 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:52.719 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:52.719 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:52.975 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:52.975 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:52.975 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:53.231 Malloc2 00:17:53.231 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:53.488 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:53.745 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:53.745 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:53.745 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2630737 00:17:53.745 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2630737 ']' 00:17:53.745 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2630737 00:17:53.745 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:53.745 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.745 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630737 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630737' 00:17:54.002 killing process with pid 2630737 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2630737 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2630737 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:54.002 00:17:54.002 real 0m50.817s 00:17:54.002 user 3m16.427s 00:17:54.002 sys 0m3.215s 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.002 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:54.002 ************************************ 00:17:54.002 END TEST nvmf_vfio_user 00:17:54.002 ************************************ 00:17:54.261 10:28:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:54.261 10:28:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.261 10:28:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.261 10:28:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.261 ************************************ 00:17:54.261 START TEST nvmf_vfio_user_nvme_compliance 00:17:54.261 ************************************ 00:17:54.261 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:54.261 * Looking for test storage... 00:17:54.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.262 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.262 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.262 --rc genhtml_branch_coverage=1 00:17:54.262 --rc genhtml_function_coverage=1 00:17:54.262 --rc genhtml_legend=1 00:17:54.262 --rc geninfo_all_blocks=1 00:17:54.262 --rc geninfo_unexecuted_blocks=1 00:17:54.262 00:17:54.262 ' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.262 --rc genhtml_branch_coverage=1 00:17:54.262 --rc genhtml_function_coverage=1 00:17:54.262 --rc genhtml_legend=1 00:17:54.262 --rc geninfo_all_blocks=1 00:17:54.262 --rc geninfo_unexecuted_blocks=1 00:17:54.262 00:17:54.262 ' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.262 --rc genhtml_branch_coverage=1 00:17:54.262 --rc genhtml_function_coverage=1 00:17:54.262 --rc 
genhtml_legend=1 00:17:54.262 --rc geninfo_all_blocks=1 00:17:54.262 --rc geninfo_unexecuted_blocks=1 00:17:54.262 00:17:54.262 ' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.262 --rc genhtml_branch_coverage=1 00:17:54.262 --rc genhtml_function_coverage=1 00:17:54.262 --rc genhtml_legend=1 00:17:54.262 --rc geninfo_all_blocks=1 00:17:54.262 --rc geninfo_unexecuted_blocks=1 00:17:54.262 00:17:54.262 ' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.262 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.263 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.263 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:54.263 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2631475 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2631475' 00:17:54.521 Process pid: 2631475 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2631475 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2631475 ']' 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.521 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.521 [2024-12-09 10:28:32.032704] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:54.521 [2024-12-09 10:28:32.032753] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.521 [2024-12-09 10:28:32.106764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.521 [2024-12-09 10:28:32.145580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.521 [2024-12-09 10:28:32.145618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.521 [2024-12-09 10:28:32.145626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.521 [2024-12-09 10:28:32.145632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.521 [2024-12-09 10:28:32.145637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:54.521 [2024-12-09 10:28:32.146947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.521 [2024-12-09 10:28:32.147053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.521 [2024-12-09 10:28:32.147054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.778 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.778 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:54.778 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:55.710 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.711 10:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.711 malloc0 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:55.711 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:55.968 00:17:55.968 00:17:55.968 CUnit - A unit testing framework for C - Version 2.1-3 00:17:55.968 http://cunit.sourceforge.net/ 00:17:55.968 00:17:55.968 00:17:55.968 Suite: nvme_compliance 00:17:55.968 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 10:28:33.494271] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.968 [2024-12-09 10:28:33.495606] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:55.968 [2024-12-09 10:28:33.495623] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:55.968 [2024-12-09 10:28:33.495630] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:55.968 [2024-12-09 10:28:33.497290] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.968 passed 00:17:55.968 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 10:28:33.575848] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.968 [2024-12-09 10:28:33.578864] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.968 passed 00:17:55.968 Test: admin_identify_ns ...[2024-12-09 10:28:33.658027] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.226 [2024-12-09 10:28:33.718819] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:56.226 [2024-12-09 10:28:33.726818] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:56.226 [2024-12-09 10:28:33.747898] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:56.226 passed 00:17:56.226 Test: admin_get_features_mandatory_features ...[2024-12-09 10:28:33.821388] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.226 [2024-12-09 10:28:33.824412] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.226 passed 00:17:56.226 Test: admin_get_features_optional_features ...[2024-12-09 10:28:33.902935] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.226 [2024-12-09 10:28:33.905952] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.226 passed 00:17:56.483 Test: admin_set_features_number_of_queues ...[2024-12-09 10:28:33.980664] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.483 [2024-12-09 10:28:34.085904] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.483 passed 00:17:56.483 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 10:28:34.161517] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.483 [2024-12-09 10:28:34.164538] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.483 passed 00:17:56.741 Test: admin_get_log_page_with_lpo ...[2024-12-09 10:28:34.233241] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.741 [2024-12-09 10:28:34.301820] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:56.741 [2024-12-09 10:28:34.314889] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.741 passed 00:17:56.741 Test: fabric_property_get ...[2024-12-09 10:28:34.391502] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.741 [2024-12-09 10:28:34.392737] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:56.741 [2024-12-09 10:28:34.394521] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.741 passed 00:17:56.999 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 10:28:34.469007] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.999 [2024-12-09 10:28:34.470238] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:56.999 [2024-12-09 10:28:34.472024] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.999 passed 00:17:56.999 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 10:28:34.547036] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.999 [2024-12-09 10:28:34.634822] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.999 [2024-12-09 10:28:34.650815] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.999 [2024-12-09 10:28:34.655920] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.999 passed 00:17:57.257 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 10:28:34.728604] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.257 [2024-12-09 10:28:34.729836] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:57.257 [2024-12-09 10:28:34.733641] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.257 passed 00:17:57.257 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 10:28:34.807261] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.257 [2024-12-09 10:28:34.882817] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:57.257 [2024-12-09 
10:28:34.906820] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:57.257 [2024-12-09 10:28:34.911918] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.257 passed 00:17:57.513 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 10:28:34.986472] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.513 [2024-12-09 10:28:34.987703] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:57.513 [2024-12-09 10:28:34.987727] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:57.513 [2024-12-09 10:28:34.989500] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.513 passed 00:17:57.513 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 10:28:35.066056] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.513 [2024-12-09 10:28:35.157817] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:57.513 [2024-12-09 10:28:35.165822] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:57.513 [2024-12-09 10:28:35.173819] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:57.513 [2024-12-09 10:28:35.181811] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:57.513 [2024-12-09 10:28:35.213912] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.769 passed 00:17:57.769 Test: admin_create_io_sq_verify_pc ...[2024-12-09 10:28:35.287812] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.769 [2024-12-09 10:28:35.304824] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:57.769 [2024-12-09 10:28:35.322090] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.769 passed 00:17:57.769 Test: admin_create_io_qp_max_qps ...[2024-12-09 10:28:35.399588] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.139 [2024-12-09 10:28:36.499818] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:59.396 [2024-12-09 10:28:36.891832] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.396 passed 00:17:59.397 Test: admin_create_io_sq_shared_cq ...[2024-12-09 10:28:36.969082] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.397 [2024-12-09 10:28:37.101823] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:59.654 [2024-12-09 10:28:37.138863] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.654 passed 00:17:59.654 00:17:59.654 Run Summary: Type Total Ran Passed Failed Inactive 00:17:59.654 suites 1 1 n/a 0 0 00:17:59.654 tests 18 18 18 0 0 00:17:59.654 asserts 360 360 360 0 n/a 00:17:59.654 00:17:59.654 Elapsed time = 1.498 seconds 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2631475 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2631475 ']' 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2631475 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631475 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631475' 00:17:59.654 killing process with pid 2631475 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2631475 00:17:59.654 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2631475 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:59.912 00:17:59.912 real 0m5.643s 00:17:59.912 user 0m15.773s 00:17:59.912 sys 0m0.501s 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:59.912 ************************************ 00:17:59.912 END TEST nvmf_vfio_user_nvme_compliance 00:17:59.912 ************************************ 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.912 ************************************ 00:17:59.912 START TEST nvmf_vfio_user_fuzz 00:17:59.912 ************************************ 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:59.912 * Looking for test storage... 00:17:59.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.912 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.174 10:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:00.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.174 --rc genhtml_branch_coverage=1 00:18:00.174 --rc genhtml_function_coverage=1 00:18:00.174 --rc genhtml_legend=1 00:18:00.174 --rc geninfo_all_blocks=1 00:18:00.174 --rc geninfo_unexecuted_blocks=1 00:18:00.174 00:18:00.174 ' 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:00.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.174 --rc genhtml_branch_coverage=1 00:18:00.174 --rc genhtml_function_coverage=1 00:18:00.174 --rc genhtml_legend=1 00:18:00.174 --rc geninfo_all_blocks=1 00:18:00.174 --rc geninfo_unexecuted_blocks=1 00:18:00.174 00:18:00.174 ' 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:00.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.174 --rc genhtml_branch_coverage=1 00:18:00.174 --rc genhtml_function_coverage=1 00:18:00.174 --rc genhtml_legend=1 00:18:00.174 --rc geninfo_all_blocks=1 00:18:00.174 --rc geninfo_unexecuted_blocks=1 00:18:00.174 00:18:00.174 ' 00:18:00.174 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:00.175 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:00.175 --rc genhtml_branch_coverage=1 00:18:00.175 --rc genhtml_function_coverage=1 00:18:00.175 --rc genhtml_legend=1 00:18:00.175 --rc geninfo_all_blocks=1 00:18:00.175 --rc geninfo_unexecuted_blocks=1 00:18:00.175 00:18:00.175 ' 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.175 10:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2632463 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2632463' 00:18:00.175 Process pid: 2632463 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2632463 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2632463 ']' 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.175 10:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.175 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.433 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.433 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:00.433 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.366 malloc0 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.366 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:01.366 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:33.412 Fuzzing completed. Shutting down the fuzz application 00:18:33.412 00:18:33.412 Dumping successful admin opcodes: 00:18:33.412 9, 10, 00:18:33.412 Dumping successful io opcodes: 00:18:33.412 0, 00:18:33.412 NS: 0x20000081ef00 I/O qp, Total commands completed: 1003630, total successful commands: 3933, random_seed: 2525811136 00:18:33.412 NS: 0x20000081ef00 admin qp, Total commands completed: 242784, total successful commands: 56, random_seed: 1400333312 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2632463 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2632463 ']' 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2632463 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632463 00:18:33.412 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632463' 00:18:33.412 killing process with pid 2632463 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2632463 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2632463 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:33.412 00:18:33.412 real 0m32.224s 00:18:33.412 user 0m30.044s 00:18:33.412 sys 0m30.714s 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.412 ************************************ 00:18:33.412 END TEST nvmf_vfio_user_fuzz 00:18:33.412 ************************************ 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.412 ************************************ 00:18:33.412 START TEST nvmf_auth_target 00:18:33.412 ************************************ 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.412 * Looking for test storage... 00:18:33.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.412 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.412 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.412 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:33.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.412 --rc genhtml_branch_coverage=1 00:18:33.412 --rc genhtml_function_coverage=1 00:18:33.412 --rc genhtml_legend=1 00:18:33.412 --rc geninfo_all_blocks=1 00:18:33.412 --rc geninfo_unexecuted_blocks=1 00:18:33.412 00:18:33.412 ' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.413 --rc genhtml_branch_coverage=1 00:18:33.413 --rc genhtml_function_coverage=1 00:18:33.413 --rc genhtml_legend=1 00:18:33.413 --rc geninfo_all_blocks=1 00:18:33.413 --rc geninfo_unexecuted_blocks=1 00:18:33.413 00:18:33.413 ' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.413 --rc genhtml_branch_coverage=1 00:18:33.413 --rc genhtml_function_coverage=1 00:18:33.413 --rc genhtml_legend=1 00:18:33.413 --rc geninfo_all_blocks=1 00:18:33.413 --rc geninfo_unexecuted_blocks=1 00:18:33.413 00:18:33.413 ' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.413 --rc genhtml_branch_coverage=1 00:18:33.413 --rc genhtml_function_coverage=1 00:18:33.413 --rc genhtml_legend=1 00:18:33.413 
--rc geninfo_all_blocks=1 00:18:33.413 --rc geninfo_unexecuted_blocks=1 00:18:33.413 00:18:33.413 ' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.413 
10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:33.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:33.413 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.413 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.413 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:33.413 10:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:33.413 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:33.413 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:38.683 10:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:38.683 10:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:38.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:38.683 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.683 
10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:38.683 Found net devices under 0000:86:00.0: cvl_0_0 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.683 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.684 
10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:38.684 Found net devices under 0000:86:00.1: cvl_0_1 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:38.684 10:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:38.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:18:38.684 00:18:38.684 --- 10.0.0.2 ping statistics --- 00:18:38.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.684 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:38.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:18:38.684 00:18:38.684 --- 10.0.0.1 ping statistics --- 00:18:38.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.684 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2640764
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2640764
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2640764 ']'
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.684 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2640865 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=21825c0ef7db4150cceb3a2f5ca3d24c25070c0747510a99 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.sIJ 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 21825c0ef7db4150cceb3a2f5ca3d24c25070c0747510a99 0 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 21825c0ef7db4150cceb3a2f5ca3d24c25070c0747510a99 0 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=21825c0ef7db4150cceb3a2f5ca3d24c25070c0747510a99 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.sIJ 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.sIJ 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.sIJ 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.684 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=39e9ef116b81de5ff22389512e445f0aa2533c78ce42bbcdeb45d16589e4eb84 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.s0J 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 39e9ef116b81de5ff22389512e445f0aa2533c78ce42bbcdeb45d16589e4eb84 3 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 39e9ef116b81de5ff22389512e445f0aa2533c78ce42bbcdeb45d16589e4eb84 3 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=39e9ef116b81de5ff22389512e445f0aa2533c78ce42bbcdeb45d16589e4eb84 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.s0J 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.s0J 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.s0J 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0f25a22570995a8758e2ecc4bd4a27f2 00:18:38.685 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oga 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0f25a22570995a8758e2ecc4bd4a27f2 1 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
0f25a22570995a8758e2ecc4bd4a27f2 1 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0f25a22570995a8758e2ecc4bd4a27f2 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oga 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oga 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.oga 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fde512c9160445371aa91a70cdaf556b1c8d9cf1ca6fd5d1 00:18:38.943 10:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.doy 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fde512c9160445371aa91a70cdaf556b1c8d9cf1ca6fd5d1 2 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fde512c9160445371aa91a70cdaf556b1c8d9cf1ca6fd5d1 2 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fde512c9160445371aa91a70cdaf556b1c8d9cf1ca6fd5d1 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.doy 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.doy 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.doy 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b979f3f7b005a4358f96ad5edbb9b8a1ec4811a4db3c8f3f 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:38.943 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.64I 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b979f3f7b005a4358f96ad5edbb9b8a1ec4811a4db3c8f3f 2 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b979f3f7b005a4358f96ad5edbb9b8a1ec4811a4db3c8f3f 2 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b979f3f7b005a4358f96ad5edbb9b8a1ec4811a4db3c8f3f 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.64I 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.64I 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.64I 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d88faf34de077cce95f05faf73d0ae1d 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jWX 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d88faf34de077cce95f05faf73d0ae1d 1 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d88faf34de077cce95f05faf73d0ae1d 1 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d88faf34de077cce95f05faf73d0ae1d 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jWX 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jWX 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.jWX 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=21cb48832ad6061d955965c49b18b3b88de8c29f98660da43cf2e2e9ff9dfbc6 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8bD 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 21cb48832ad6061d955965c49b18b3b88de8c29f98660da43cf2e2e9ff9dfbc6 3 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 21cb48832ad6061d955965c49b18b3b88de8c29f98660da43cf2e2e9ff9dfbc6 3 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=21cb48832ad6061d955965c49b18b3b88de8c29f98660da43cf2e2e9ff9dfbc6 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:38.944 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8bD 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8bD 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8bD 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2640764 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2640764 ']' 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
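The gen_dhchap_key traces above each draw a random secret with `xxd -p -c0 -l <len/2> /dev/urandom`, pass it through a small inline Python helper (`format_key DHHC-1 <hex-secret> <digest>`), chmod the resulting file to 0600, and record its path in keys[]/ckeys[]. A minimal sketch of that formatting step, assuming the standard DH-HMAC-CHAP secret representation (prefix, a two-hex-digit hash identifier — 0=null, 1=sha256, 2=sha384, 3=sha512, matching the digests map in the trace — then base64 of the secret bytes followed by their little-endian CRC32); this is an illustration of the format, not a copy of nvmf/common.sh:

```python
import base64
import zlib


def format_dhchap_key(secret: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Emit '<prefix>:<2-hex-digit hash id>:<base64(secret || crc32 LE)>:'.

    Sketch of the format_key step traced above: the ASCII secret gets a
    4-byte little-endian CRC32 appended before base64 encoding.
    """
    raw = secret.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")  # integrity tag over the secret
    return "{}:{:02x}:{}:".format(prefix, digest_id, base64.b64encode(raw + crc).decode("ascii"))


# The 48-char hex string from `xxd -p -c0 -l 24 /dev/urandom` in the trace,
# formatted with digest id 0 (null) as /tmp/spdk.key-null.sIJ would hold it.
print(format_dhchap_key("21825c0ef7db4150cceb3a2f5ca3d24c25070c0747510a99", 0))
```

As in the trace, the output file should be created with mode 0600, since it holds the cleartext DH-HMAC-CHAP secret.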
00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2640865 /var/tmp/host.sock 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2640865 ']' 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:39.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.202 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sIJ 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.sIJ 00:18:39.460 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.sIJ 00:18:39.718 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.s0J ]] 00:18:39.718 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s0J 00:18:39.718 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.718 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.718 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.718 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s0J 00:18:39.718 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s0J 00:18:39.975 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:39.975 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oga 00:18:39.975 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.975 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.975 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.975 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.oga 00:18:39.975 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.oga 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.doy ]] 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.doy 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.doy 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.doy 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.64I 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.64I 00:18:40.234 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.64I 00:18:40.492 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.jWX ]] 00:18:40.492 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jWX 00:18:40.492 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.492 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.492 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.492 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jWX 00:18:40.492 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jWX 00:18:40.750 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:40.750 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8bD 00:18:40.750 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.750 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.750 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.750 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8bD 00:18:40.750 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8bD 00:18:41.008 10:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.008 10:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.008 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.266 00:18:41.266 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.266 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.266 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.524 { 00:18:41.524 "cntlid": 1, 00:18:41.524 "qid": 0, 00:18:41.524 "state": "enabled", 00:18:41.524 "thread": "nvmf_tgt_poll_group_000", 00:18:41.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:41.524 "listen_address": { 00:18:41.524 "trtype": "TCP", 00:18:41.524 "adrfam": "IPv4", 00:18:41.524 "traddr": "10.0.0.2", 00:18:41.524 "trsvcid": "4420" 00:18:41.524 }, 00:18:41.524 "peer_address": { 00:18:41.524 "trtype": "TCP", 00:18:41.524 "adrfam": "IPv4", 00:18:41.524 "traddr": "10.0.0.1", 00:18:41.524 "trsvcid": "58132" 00:18:41.524 }, 00:18:41.524 "auth": { 00:18:41.524 "state": "completed", 00:18:41.524 "digest": "sha256", 00:18:41.524 "dhgroup": "null" 00:18:41.524 } 00:18:41.524 } 00:18:41.524 ]' 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.524 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.786 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:41.786 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.786 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.786 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.786 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.786 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:18:41.786 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:42.352 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.611 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.870 00:18:42.870 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.871 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.871 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.129 { 00:18:43.129 "cntlid": 3, 00:18:43.129 "qid": 0, 00:18:43.129 "state": "enabled", 00:18:43.129 "thread": "nvmf_tgt_poll_group_000", 00:18:43.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:43.129 "listen_address": { 00:18:43.129 "trtype": "TCP", 00:18:43.129 "adrfam": "IPv4", 00:18:43.129 
"traddr": "10.0.0.2", 00:18:43.129 "trsvcid": "4420" 00:18:43.129 }, 00:18:43.129 "peer_address": { 00:18:43.129 "trtype": "TCP", 00:18:43.129 "adrfam": "IPv4", 00:18:43.129 "traddr": "10.0.0.1", 00:18:43.129 "trsvcid": "58164" 00:18:43.129 }, 00:18:43.129 "auth": { 00:18:43.129 "state": "completed", 00:18:43.129 "digest": "sha256", 00:18:43.129 "dhgroup": "null" 00:18:43.129 } 00:18:43.129 } 00:18:43.129 ]' 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.129 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.387 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:18:43.387 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.954 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.469 00:18:44.469 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.469 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.469 
10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.726 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.726 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.726 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.726 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.726 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.726 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.726 { 00:18:44.726 "cntlid": 5, 00:18:44.726 "qid": 0, 00:18:44.726 "state": "enabled", 00:18:44.726 "thread": "nvmf_tgt_poll_group_000", 00:18:44.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:44.727 "listen_address": { 00:18:44.727 "trtype": "TCP", 00:18:44.727 "adrfam": "IPv4", 00:18:44.727 "traddr": "10.0.0.2", 00:18:44.727 "trsvcid": "4420" 00:18:44.727 }, 00:18:44.727 "peer_address": { 00:18:44.727 "trtype": "TCP", 00:18:44.727 "adrfam": "IPv4", 00:18:44.727 "traddr": "10.0.0.1", 00:18:44.727 "trsvcid": "58176" 00:18:44.727 }, 00:18:44.727 "auth": { 00:18:44.727 "state": "completed", 00:18:44.727 "digest": "sha256", 00:18:44.727 "dhgroup": "null" 00:18:44.727 } 00:18:44.727 } 00:18:44.727 ]' 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.727 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.985 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:18:44.985 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.551 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.808 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:45.808 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.808 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
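The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion that recurs in the trace appends the controller-key flag only when a controller key exists for the given index; key3 in the pass above has none, so `nvmf_subsystem_add_host` is invoked without `--dhchap-ctrlr-key`. A minimal sketch of that bash pattern, using a hypothetical key table (the array contents are illustrative, not taken from the script):

```shell
#!/usr/bin/env bash
# Hypothetical controller-key table: index 3 deliberately has no entry,
# mirroring the key3 case in the trace above.
ckeys=("ckey0" "ckey1" "ckey2" "")

build_dhchap_args() {
    local keyid=$1
    # ${var:+word} expands to two extra words only when ckeys[keyid]
    # is set and non-empty; otherwise the array stays empty.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "--dhchap-key key$keyid" "${ckey[@]}"
}

build_dhchap_args 0   # → --dhchap-key key0 --dhchap-ctrlr-key ckey0
build_dhchap_args 3   # → --dhchap-key key3
```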
00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.809 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.066 00:18:46.066 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.066 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.066 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.324 
10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.324 { 00:18:46.324 "cntlid": 7, 00:18:46.324 "qid": 0, 00:18:46.324 "state": "enabled", 00:18:46.324 "thread": "nvmf_tgt_poll_group_000", 00:18:46.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:46.324 "listen_address": { 00:18:46.324 "trtype": "TCP", 00:18:46.324 "adrfam": "IPv4", 00:18:46.324 "traddr": "10.0.0.2", 00:18:46.324 "trsvcid": "4420" 00:18:46.324 }, 00:18:46.324 "peer_address": { 00:18:46.324 "trtype": "TCP", 00:18:46.324 "adrfam": "IPv4", 00:18:46.324 "traddr": "10.0.0.1", 00:18:46.324 "trsvcid": "58198" 00:18:46.324 }, 00:18:46.324 "auth": { 00:18:46.324 "state": "completed", 00:18:46.324 "digest": "sha256", 00:18:46.324 "dhgroup": "null" 00:18:46.324 } 00:18:46.324 } 00:18:46.324 ]' 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.324 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.582 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:18:46.582 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.146 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.404 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.660 00:18:47.660 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.660 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.660 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.917 { 00:18:47.917 "cntlid": 9, 00:18:47.917 "qid": 0, 00:18:47.917 "state": "enabled", 00:18:47.917 "thread": "nvmf_tgt_poll_group_000", 00:18:47.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:47.917 "listen_address": { 00:18:47.917 "trtype": "TCP", 00:18:47.917 "adrfam": "IPv4", 00:18:47.917 "traddr": "10.0.0.2", 00:18:47.917 "trsvcid": "4420" 00:18:47.917 }, 00:18:47.917 "peer_address": { 00:18:47.917 "trtype": "TCP", 00:18:47.917 "adrfam": "IPv4", 00:18:47.917 "traddr": "10.0.0.1", 00:18:47.917 "trsvcid": "58234" 00:18:47.917 
}, 00:18:47.917 "auth": { 00:18:47.917 "state": "completed", 00:18:47.917 "digest": "sha256", 00:18:47.917 "dhgroup": "ffdhe2048" 00:18:47.917 } 00:18:47.917 } 00:18:47.917 ]' 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.917 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.174 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:18:48.174 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret 
DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.739 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.997 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.256 00:18:49.256 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.256 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.256 10:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.513 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.513 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.513 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.513 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.513 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.513 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.513 { 00:18:49.513 "cntlid": 11, 00:18:49.513 "qid": 0, 00:18:49.513 "state": "enabled", 00:18:49.513 "thread": "nvmf_tgt_poll_group_000", 00:18:49.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:49.513 "listen_address": { 00:18:49.513 "trtype": "TCP", 00:18:49.513 "adrfam": "IPv4", 00:18:49.513 "traddr": "10.0.0.2", 00:18:49.513 "trsvcid": "4420" 00:18:49.513 }, 00:18:49.513 "peer_address": { 00:18:49.513 "trtype": "TCP", 00:18:49.513 "adrfam": "IPv4", 00:18:49.513 "traddr": "10.0.0.1", 00:18:49.513 "trsvcid": "37302" 00:18:49.513 }, 00:18:49.514 "auth": { 00:18:49.514 "state": "completed", 00:18:49.514 "digest": "sha256", 00:18:49.514 "dhgroup": "ffdhe2048" 00:18:49.514 } 00:18:49.514 } 00:18:49.514 ]' 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.514 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.771 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:18:49.771 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:18:50.338 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.338 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:50.338 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.338 10:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.338 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.338 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.338 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.338 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.602 10:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.602 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.860 00:18:50.860 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.860 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.860 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.860 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.118 { 00:18:51.118 "cntlid": 13, 00:18:51.118 "qid": 0, 00:18:51.118 "state": "enabled", 00:18:51.118 "thread": "nvmf_tgt_poll_group_000", 00:18:51.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:51.118 "listen_address": { 00:18:51.118 "trtype": "TCP", 00:18:51.118 "adrfam": "IPv4", 00:18:51.118 "traddr": "10.0.0.2", 00:18:51.118 "trsvcid": "4420" 00:18:51.118 }, 00:18:51.118 "peer_address": { 00:18:51.118 "trtype": "TCP", 00:18:51.118 "adrfam": "IPv4", 00:18:51.118 "traddr": "10.0.0.1", 00:18:51.118 "trsvcid": "37324" 00:18:51.118 }, 00:18:51.118 "auth": { 00:18:51.118 "state": "completed", 00:18:51.118 "digest": "sha256", 00:18:51.118 "dhgroup": "ffdhe2048" 00:18:51.118 } 00:18:51.118 } 00:18:51.118 ]' 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.118 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:18:51.376 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:18:51.376 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.941 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.199 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.456 00:18:52.456 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.456 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.456 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.456 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.456 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.712 { 00:18:52.712 "cntlid": 15, 00:18:52.712 "qid": 0, 00:18:52.712 "state": "enabled", 00:18:52.712 "thread": "nvmf_tgt_poll_group_000", 00:18:52.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:52.712 "listen_address": { 00:18:52.712 "trtype": "TCP", 00:18:52.712 "adrfam": "IPv4", 00:18:52.712 "traddr": "10.0.0.2", 00:18:52.712 "trsvcid": "4420" 00:18:52.712 }, 00:18:52.712 "peer_address": { 00:18:52.712 "trtype": "TCP", 00:18:52.712 "adrfam": "IPv4", 00:18:52.712 "traddr": "10.0.0.1", 00:18:52.712 "trsvcid": "37346" 00:18:52.712 }, 00:18:52.712 "auth": { 00:18:52.712 
"state": "completed", 00:18:52.712 "digest": "sha256", 00:18:52.712 "dhgroup": "ffdhe2048" 00:18:52.712 } 00:18:52.712 } 00:18:52.712 ]' 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.712 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.969 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:18:52.969 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.533 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.533 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.791 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.049 00:18:54.049 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.049 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.049 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.306 
10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.306 { 00:18:54.306 "cntlid": 17, 00:18:54.306 "qid": 0, 00:18:54.306 "state": "enabled", 00:18:54.306 "thread": "nvmf_tgt_poll_group_000", 00:18:54.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:54.306 "listen_address": { 00:18:54.306 "trtype": "TCP", 00:18:54.306 "adrfam": "IPv4", 00:18:54.306 "traddr": "10.0.0.2", 00:18:54.306 "trsvcid": "4420" 00:18:54.306 }, 00:18:54.306 "peer_address": { 00:18:54.306 "trtype": "TCP", 00:18:54.306 "adrfam": "IPv4", 00:18:54.306 "traddr": "10.0.0.1", 00:18:54.306 "trsvcid": "37382" 00:18:54.306 }, 00:18:54.306 "auth": { 00:18:54.306 "state": "completed", 00:18:54.306 "digest": "sha256", 00:18:54.306 "dhgroup": "ffdhe3072" 00:18:54.306 } 00:18:54.306 } 00:18:54.306 ]' 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.306 10:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.306 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.564 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:18:54.564 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:18:55.130 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.130 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:55.130 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.130 10:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.130 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.130 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.130 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.130 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.388 10:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.388 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.647 00:18:55.647 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.647 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.647 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.906 { 00:18:55.906 "cntlid": 19, 00:18:55.906 "qid": 0, 00:18:55.906 "state": "enabled", 00:18:55.906 "thread": "nvmf_tgt_poll_group_000", 00:18:55.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:55.906 "listen_address": { 00:18:55.906 "trtype": "TCP", 00:18:55.906 "adrfam": "IPv4", 00:18:55.906 "traddr": "10.0.0.2", 00:18:55.906 "trsvcid": "4420" 00:18:55.906 }, 00:18:55.906 "peer_address": { 00:18:55.906 "trtype": "TCP", 00:18:55.906 "adrfam": "IPv4", 00:18:55.906 "traddr": "10.0.0.1", 00:18:55.906 "trsvcid": "37420" 00:18:55.906 }, 00:18:55.906 "auth": { 00:18:55.906 "state": "completed", 00:18:55.906 "digest": "sha256", 00:18:55.906 "dhgroup": "ffdhe3072" 00:18:55.906 } 00:18:55.906 } 00:18:55.906 ]' 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.906 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
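The qpair check in the log above pipes `nvmf_subsystem_get_qpairs` output through `jq -r '.[0].auth.digest'` (and `.dhgroup`, `.state`) and string-compares the results. A self-contained sketch of that verification step, with a sample qpair listing modeled on the JSON printed in this run (only the fields the check reads are kept; this is an illustration, not the test's actual code path):

```shell
#!/usr/bin/env bash
# Verify negotiated DH-CHAP auth parameters the way target/auth.sh does:
# extract the .auth fields from the qpair JSON with jq and compare them.
set -euo pipefail

# Sample qpair listing modeled on the nvmf_subsystem_get_qpairs output above.
qpairs='[
  {
    "cntlid": 19,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe3072"
    }
  }
]'

digest=$(jq -r '.[0].auth.digest' <<<"$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<<"$qpairs")
state=$(jq -r '.[0].auth.state' <<<"$qpairs")

# The same pattern-match style checks as in the log (auth.sh@75..77).
[[ $digest == sha256 ]]
[[ $dhgroup == ffdhe3072 ]]
[[ $state == completed ]]
echo "auth ok: $digest/$dhgroup ($state)"
```

If any field mismatches, the `[[ ... ]]` test fails and `set -e` aborts the script, which is how a bad negotiation surfaces as a test failure in this log.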
00:18:56.165 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:18:56.165 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.731 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.990 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.248 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.248 { 00:18:57.248 "cntlid": 21, 00:18:57.248 "qid": 0, 00:18:57.248 "state": "enabled", 00:18:57.248 "thread": "nvmf_tgt_poll_group_000", 00:18:57.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:57.248 "listen_address": { 00:18:57.248 "trtype": "TCP", 00:18:57.248 "adrfam": "IPv4", 00:18:57.248 "traddr": "10.0.0.2", 00:18:57.248 "trsvcid": "4420" 00:18:57.248 }, 00:18:57.248 "peer_address": { 00:18:57.248 "trtype": "TCP", 00:18:57.248 "adrfam": "IPv4", 
00:18:57.248 "traddr": "10.0.0.1", 00:18:57.248 "trsvcid": "37442" 00:18:57.248 }, 00:18:57.248 "auth": { 00:18:57.248 "state": "completed", 00:18:57.248 "digest": "sha256", 00:18:57.248 "dhgroup": "ffdhe3072" 00:18:57.248 } 00:18:57.248 } 00:18:57.248 ]' 00:18:57.248 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.506 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.506 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.506 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.506 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.506 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.506 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.506 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.771 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:18:57.771 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.399 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:58.399 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.399 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.678 00:18:58.678 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.678 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.678 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.956 { 00:18:58.956 "cntlid": 23, 00:18:58.956 "qid": 0, 00:18:58.956 "state": "enabled", 00:18:58.956 "thread": "nvmf_tgt_poll_group_000", 00:18:58.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:58.956 "listen_address": { 00:18:58.956 "trtype": "TCP", 00:18:58.956 "adrfam": "IPv4", 00:18:58.956 "traddr": "10.0.0.2", 00:18:58.956 "trsvcid": "4420" 00:18:58.956 }, 00:18:58.956 "peer_address": { 00:18:58.956 "trtype": "TCP", 00:18:58.956 "adrfam": "IPv4", 00:18:58.956 "traddr": "10.0.0.1", 00:18:58.956 "trsvcid": "37460" 00:18:58.956 }, 00:18:58.956 "auth": { 00:18:58.956 "state": "completed", 00:18:58.956 "digest": "sha256", 00:18:58.956 "dhgroup": "ffdhe3072" 00:18:58.956 } 00:18:58.956 } 00:18:58.956 ]' 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.956 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.956 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.215 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.215 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.215 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.215 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:18:59.215 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
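The surrounding log entries show the test matrix structure: an outer `for dhgroup in "${dhgroups[@]}"` loop and an inner `for keyid in "${!keys[@]}"` loop, each iteration reconfiguring the host with `bdev_nvme_set_options` and then calling `connect_authenticate`. A dry-run sketch of that loop shape, which only echoes the `rpc.py` invocations instead of executing them (the array contents are assumptions based on the sha256/ffdhe3072/ffdhe4096 combinations visible in this section, not the script's real arrays):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the auth.sh test matrix: for each DH group, cycle
# through every key id and print the host-side reconfiguration command
# that the real test would run via hostrpc.
set -euo pipefail

rpc="scripts/rpc.py -s /var/tmp/host.sock"   # socket path as used in this run
digest=sha256
dhgroups=(ffdhe3072 ffdhe4096)               # groups exercised in this section
keys=(key0 key1 key2 key3)                   # hypothetical key set for illustration

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Real test: hostrpc bdev_nvme_set_options ... ; here we only echo it.
    echo "$rpc bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    echo "  -> connect_authenticate $digest $dhgroup $keyid"
  done
done
```

This mirrors why the log repeats the same `bdev_nvme_set_options` call before each `connect_authenticate sha256 ffdhe3072 1`, `... 2`, `... 3`, then moves on to `ffdhe4096 0` and restarts the key cycle.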
00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:59.782 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.041 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.299 00:19:00.299 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.299 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.300 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.557 10:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.557 { 00:19:00.557 "cntlid": 25, 00:19:00.557 "qid": 0, 00:19:00.557 "state": "enabled", 00:19:00.557 "thread": "nvmf_tgt_poll_group_000", 00:19:00.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:00.557 "listen_address": { 00:19:00.557 "trtype": "TCP", 00:19:00.557 "adrfam": "IPv4", 00:19:00.557 "traddr": "10.0.0.2", 00:19:00.557 "trsvcid": "4420" 00:19:00.557 }, 00:19:00.557 "peer_address": { 00:19:00.557 "trtype": "TCP", 00:19:00.557 "adrfam": "IPv4", 00:19:00.557 "traddr": "10.0.0.1", 00:19:00.557 "trsvcid": "55324" 00:19:00.557 }, 00:19:00.557 "auth": { 00:19:00.557 "state": "completed", 00:19:00.557 "digest": "sha256", 00:19:00.557 "dhgroup": "ffdhe4096" 00:19:00.557 } 00:19:00.557 } 00:19:00.557 ]' 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.557 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.814 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.814 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.814 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.814 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:00.814 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:01.379 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.379 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:01.379 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.379 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.379 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.379 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.379 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.379 10:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.637 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.895 00:19:01.895 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.895 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.895 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.152 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.152 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.152 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.152 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.152 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.153 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.153 { 00:19:02.153 "cntlid": 27, 00:19:02.153 "qid": 0, 00:19:02.153 "state": "enabled", 00:19:02.153 "thread": "nvmf_tgt_poll_group_000", 00:19:02.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:02.153 "listen_address": { 00:19:02.153 "trtype": "TCP", 00:19:02.153 "adrfam": "IPv4", 00:19:02.153 "traddr": "10.0.0.2", 00:19:02.153 
"trsvcid": "4420" 00:19:02.153 }, 00:19:02.153 "peer_address": { 00:19:02.153 "trtype": "TCP", 00:19:02.153 "adrfam": "IPv4", 00:19:02.153 "traddr": "10.0.0.1", 00:19:02.153 "trsvcid": "55346" 00:19:02.153 }, 00:19:02.153 "auth": { 00:19:02.153 "state": "completed", 00:19:02.153 "digest": "sha256", 00:19:02.153 "dhgroup": "ffdhe4096" 00:19:02.153 } 00:19:02.153 } 00:19:02.153 ]' 00:19:02.153 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.153 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.153 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.153 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.153 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.411 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.411 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.411 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.411 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:02.411 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.023 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.280 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.537 00:19:03.537 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.537 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:03.537 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.794 { 00:19:03.794 "cntlid": 29, 00:19:03.794 "qid": 0, 00:19:03.794 "state": "enabled", 00:19:03.794 "thread": "nvmf_tgt_poll_group_000", 00:19:03.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:03.794 "listen_address": { 00:19:03.794 "trtype": "TCP", 00:19:03.794 "adrfam": "IPv4", 00:19:03.794 "traddr": "10.0.0.2", 00:19:03.794 "trsvcid": "4420" 00:19:03.794 }, 00:19:03.794 "peer_address": { 00:19:03.794 "trtype": "TCP", 00:19:03.794 "adrfam": "IPv4", 00:19:03.794 "traddr": "10.0.0.1", 00:19:03.794 "trsvcid": "55382" 00:19:03.794 }, 00:19:03.794 "auth": { 00:19:03.794 "state": "completed", 00:19:03.794 "digest": "sha256", 00:19:03.794 "dhgroup": "ffdhe4096" 00:19:03.794 } 00:19:03.794 } 00:19:03.794 ]' 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.794 10:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.794 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.052 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:04.052 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.617 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.875 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.133 00:19:05.133 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.133 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.133 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.391 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.391 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.391 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.391 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:05.391 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.391 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.391 { 00:19:05.391 "cntlid": 31, 00:19:05.391 "qid": 0, 00:19:05.391 "state": "enabled", 00:19:05.391 "thread": "nvmf_tgt_poll_group_000", 00:19:05.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:05.391 "listen_address": { 00:19:05.391 "trtype": "TCP", 00:19:05.391 "adrfam": "IPv4", 00:19:05.391 "traddr": "10.0.0.2", 00:19:05.391 "trsvcid": "4420" 00:19:05.391 }, 00:19:05.391 "peer_address": { 00:19:05.391 "trtype": "TCP", 00:19:05.391 "adrfam": "IPv4", 00:19:05.391 "traddr": "10.0.0.1", 00:19:05.391 "trsvcid": "55402" 00:19:05.391 }, 00:19:05.391 "auth": { 00:19:05.391 "state": "completed", 00:19:05.391 "digest": "sha256", 00:19:05.391 "dhgroup": "ffdhe4096" 00:19:05.391 } 00:19:05.391 } 00:19:05.391 ]' 00:19:05.391 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.391 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.391 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.391 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.391 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.650 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.650 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.650 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.650 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:05.650 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.216 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:06.216 10:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.474 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.732 00:19:06.732 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.732 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.732 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.990 { 00:19:06.990 "cntlid": 33, 00:19:06.990 "qid": 0, 00:19:06.990 "state": "enabled", 00:19:06.990 "thread": "nvmf_tgt_poll_group_000", 00:19:06.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:06.990 "listen_address": { 00:19:06.990 "trtype": "TCP", 00:19:06.990 "adrfam": "IPv4", 00:19:06.990 "traddr": "10.0.0.2", 00:19:06.990 
"trsvcid": "4420" 00:19:06.990 }, 00:19:06.990 "peer_address": { 00:19:06.990 "trtype": "TCP", 00:19:06.990 "adrfam": "IPv4", 00:19:06.990 "traddr": "10.0.0.1", 00:19:06.990 "trsvcid": "55424" 00:19:06.990 }, 00:19:06.990 "auth": { 00:19:06.990 "state": "completed", 00:19:06.990 "digest": "sha256", 00:19:06.990 "dhgroup": "ffdhe6144" 00:19:06.990 } 00:19:06.990 } 00:19:06.990 ]' 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.990 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.249 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.249 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.249 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.249 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:07.249 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.814 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.073 10:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.073 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.331 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.590 { 00:19:08.590 "cntlid": 35, 00:19:08.590 "qid": 0, 00:19:08.590 "state": "enabled", 00:19:08.590 "thread": "nvmf_tgt_poll_group_000", 00:19:08.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:08.590 "listen_address": { 00:19:08.590 "trtype": "TCP", 00:19:08.590 "adrfam": "IPv4", 00:19:08.590 "traddr": "10.0.0.2", 00:19:08.590 "trsvcid": "4420" 00:19:08.590 }, 00:19:08.590 "peer_address": { 00:19:08.590 "trtype": "TCP", 00:19:08.590 "adrfam": "IPv4", 00:19:08.590 "traddr": "10.0.0.1", 00:19:08.590 "trsvcid": "55470" 00:19:08.590 }, 00:19:08.590 "auth": { 00:19:08.590 "state": "completed", 00:19:08.590 "digest": "sha256", 00:19:08.590 "dhgroup": "ffdhe6144" 00:19:08.590 } 00:19:08.590 } 00:19:08.590 ]' 00:19:08.590 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.850 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.850 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.850 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.850 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.850 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.850 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.850 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.108 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:09.108 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.674 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.240 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.240 10:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.240 { 00:19:10.240 "cntlid": 37, 00:19:10.240 "qid": 0, 00:19:10.240 "state": "enabled", 00:19:10.240 "thread": "nvmf_tgt_poll_group_000", 00:19:10.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:10.240 "listen_address": { 00:19:10.240 "trtype": "TCP", 00:19:10.240 "adrfam": "IPv4", 00:19:10.240 "traddr": "10.0.0.2", 00:19:10.240 "trsvcid": "4420" 00:19:10.240 }, 00:19:10.240 "peer_address": { 00:19:10.240 "trtype": "TCP", 00:19:10.240 "adrfam": "IPv4", 00:19:10.240 "traddr": "10.0.0.1", 00:19:10.240 "trsvcid": "44502" 00:19:10.240 }, 00:19:10.240 "auth": { 00:19:10.240 "state": "completed", 00:19:10.240 "digest": "sha256", 00:19:10.240 "dhgroup": "ffdhe6144" 00:19:10.240 } 00:19:10.240 } 00:19:10.240 ]' 00:19:10.240 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.498 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.498 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.498 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.498 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.498 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.498 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.498 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.756 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:10.756 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.320 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.320 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.320 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.320 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.320 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.886 00:19:11.886 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.886 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.887 { 00:19:11.887 "cntlid": 39, 00:19:11.887 "qid": 0, 00:19:11.887 "state": "enabled", 00:19:11.887 "thread": "nvmf_tgt_poll_group_000", 00:19:11.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:11.887 "listen_address": { 00:19:11.887 "trtype": "TCP", 00:19:11.887 "adrfam": 
"IPv4", 00:19:11.887 "traddr": "10.0.0.2", 00:19:11.887 "trsvcid": "4420" 00:19:11.887 }, 00:19:11.887 "peer_address": { 00:19:11.887 "trtype": "TCP", 00:19:11.887 "adrfam": "IPv4", 00:19:11.887 "traddr": "10.0.0.1", 00:19:11.887 "trsvcid": "44526" 00:19:11.887 }, 00:19:11.887 "auth": { 00:19:11.887 "state": "completed", 00:19:11.887 "digest": "sha256", 00:19:11.887 "dhgroup": "ffdhe6144" 00:19:11.887 } 00:19:11.887 } 00:19:11.887 ]' 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.887 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.144 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.144 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.144 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.144 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.144 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.402 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:12.402 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:12.966 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.966 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:12.966 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.966 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.967 
10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.967 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.224 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.224 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.224 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.224 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.481 00:19:13.481 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.481 10:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.481 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.738 { 00:19:13.738 "cntlid": 41, 00:19:13.738 "qid": 0, 00:19:13.738 "state": "enabled", 00:19:13.738 "thread": "nvmf_tgt_poll_group_000", 00:19:13.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:13.738 "listen_address": { 00:19:13.738 "trtype": "TCP", 00:19:13.738 "adrfam": "IPv4", 00:19:13.738 "traddr": "10.0.0.2", 00:19:13.738 "trsvcid": "4420" 00:19:13.738 }, 00:19:13.738 "peer_address": { 00:19:13.738 "trtype": "TCP", 00:19:13.738 "adrfam": "IPv4", 00:19:13.738 "traddr": "10.0.0.1", 00:19:13.738 "trsvcid": "44552" 00:19:13.738 }, 00:19:13.738 "auth": { 00:19:13.738 "state": "completed", 00:19:13.738 "digest": "sha256", 00:19:13.738 "dhgroup": "ffdhe8192" 00:19:13.738 } 00:19:13.738 } 00:19:13.738 ]' 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:13.738 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.995 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.995 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.995 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.995 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.995 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.252 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:14.252 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.818 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.384 00:19:15.384 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.384 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.384 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.641 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.641 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.641 10:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.641 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.641 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.641 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.641 { 00:19:15.641 "cntlid": 43, 00:19:15.641 "qid": 0, 00:19:15.641 "state": "enabled", 00:19:15.641 "thread": "nvmf_tgt_poll_group_000", 00:19:15.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:15.641 "listen_address": { 00:19:15.641 "trtype": "TCP", 00:19:15.641 "adrfam": "IPv4", 00:19:15.641 "traddr": "10.0.0.2", 00:19:15.641 "trsvcid": "4420" 00:19:15.641 }, 00:19:15.641 "peer_address": { 00:19:15.641 "trtype": "TCP", 00:19:15.641 "adrfam": "IPv4", 00:19:15.641 "traddr": "10.0.0.1", 00:19:15.641 "trsvcid": "44576" 00:19:15.641 }, 00:19:15.641 "auth": { 00:19:15.642 "state": "completed", 00:19:15.642 "digest": "sha256", 00:19:15.642 "dhgroup": "ffdhe8192" 00:19:15.642 } 00:19:15.642 } 00:19:15.642 ]' 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.642 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.899 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:15.899 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.465 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.722 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.286 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.286 { 00:19:17.286 "cntlid": 45, 00:19:17.286 "qid": 0, 00:19:17.286 "state": "enabled", 00:19:17.286 "thread": "nvmf_tgt_poll_group_000", 00:19:17.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:17.286 
"listen_address": { 00:19:17.286 "trtype": "TCP", 00:19:17.286 "adrfam": "IPv4", 00:19:17.286 "traddr": "10.0.0.2", 00:19:17.286 "trsvcid": "4420" 00:19:17.286 }, 00:19:17.286 "peer_address": { 00:19:17.286 "trtype": "TCP", 00:19:17.286 "adrfam": "IPv4", 00:19:17.286 "traddr": "10.0.0.1", 00:19:17.286 "trsvcid": "44616" 00:19:17.286 }, 00:19:17.286 "auth": { 00:19:17.286 "state": "completed", 00:19:17.286 "digest": "sha256", 00:19:17.286 "dhgroup": "ffdhe8192" 00:19:17.286 } 00:19:17.286 } 00:19:17.286 ]' 00:19:17.286 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.547 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.547 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.547 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.547 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.547 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.547 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.547 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.808 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:17.808 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:18.374 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.374 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.374 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.374 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.374 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.374 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.374 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.375 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:18.375 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:18.941
00:19:18.941 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:18.941 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:18.941 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:19.199 {
00:19:19.199 "cntlid": 47,
00:19:19.199 "qid": 0,
00:19:19.199 "state": "enabled",
00:19:19.199 "thread": "nvmf_tgt_poll_group_000",
00:19:19.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:19.199 "listen_address": {
00:19:19.199 "trtype": "TCP",
00:19:19.199 "adrfam": "IPv4",
00:19:19.199 "traddr": "10.0.0.2",
00:19:19.199 "trsvcid": "4420"
00:19:19.199 },
00:19:19.199 "peer_address": {
00:19:19.199 "trtype": "TCP",
00:19:19.199 "adrfam": "IPv4",
00:19:19.199 "traddr": "10.0.0.1",
00:19:19.199 "trsvcid": "44638"
00:19:19.199 },
00:19:19.199 "auth": {
00:19:19.199 "state": "completed",
00:19:19.199 "digest": "sha256",
00:19:19.199 "dhgroup": "ffdhe8192"
00:19:19.199 }
00:19:19.199 }
00:19:19.199 ]'
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:19.199 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:19.457 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=:
00:19:19.457 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=:
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:20.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:20.023 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:20.282 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:20.538
00:19:20.538 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:20.538 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:20.538 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:20.795 {
00:19:20.795 "cntlid": 49,
00:19:20.795 "qid": 0,
00:19:20.795 "state": "enabled",
00:19:20.795 "thread": "nvmf_tgt_poll_group_000",
00:19:20.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:20.795 "listen_address": {
00:19:20.795 "trtype": "TCP",
00:19:20.795 "adrfam": "IPv4",
00:19:20.795 "traddr": "10.0.0.2",
00:19:20.795 "trsvcid": "4420"
00:19:20.795 },
00:19:20.795 "peer_address": {
00:19:20.795 "trtype": "TCP",
00:19:20.795 "adrfam": "IPv4",
00:19:20.795 "traddr": "10.0.0.1",
00:19:20.795 "trsvcid": "40060"
00:19:20.795 },
00:19:20.795 "auth": {
00:19:20.795 "state": "completed",
00:19:20.795 "digest": "sha384",
00:19:20.795 "dhgroup": "null"
00:19:20.795 }
00:19:20.795 }
00:19:20.795 ]'
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.795 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:21.053 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=:
00:19:21.053 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=:
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:21.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:21.617 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:21.875 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:22.132
00:19:22.132 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:22.132 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:22.132 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:22.390 {
00:19:22.390 "cntlid": 51,
00:19:22.390 "qid": 0,
00:19:22.390 "state": "enabled",
00:19:22.390 "thread": "nvmf_tgt_poll_group_000",
00:19:22.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:22.390 "listen_address": {
00:19:22.390 "trtype": "TCP",
00:19:22.390 "adrfam": "IPv4",
00:19:22.390 "traddr": "10.0.0.2",
00:19:22.390 "trsvcid": "4420"
00:19:22.390 },
00:19:22.390 "peer_address": {
00:19:22.390 "trtype": "TCP",
00:19:22.390 "adrfam": "IPv4",
00:19:22.390 "traddr": "10.0.0.1",
00:19:22.390 "trsvcid": "40072"
00:19:22.390 },
00:19:22.390 "auth": {
00:19:22.390 "state": "completed",
00:19:22.390 "digest": "sha384",
00:19:22.390 "dhgroup": "null"
00:19:22.390 }
00:19:22.390 }
00:19:22.390 ]'
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:22.390 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:22.390 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:22.390 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:22.390 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:22.390 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:22.390 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:22.681 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==:
00:19:22.681 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==:
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:23.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:23.246 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:23.503 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:23.761
00:19:23.761 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:23.761 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:23.761 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:24.019 {
00:19:24.019 "cntlid": 53,
00:19:24.019 "qid": 0,
00:19:24.019 "state": "enabled",
00:19:24.019 "thread": "nvmf_tgt_poll_group_000",
00:19:24.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:24.019 "listen_address": {
00:19:24.019 "trtype": "TCP",
00:19:24.019 "adrfam": "IPv4",
00:19:24.019 "traddr": "10.0.0.2",
00:19:24.019 "trsvcid": "4420"
00:19:24.019 },
00:19:24.019 "peer_address": {
00:19:24.019 "trtype": "TCP",
00:19:24.019 "adrfam": "IPv4",
00:19:24.019 "traddr": "10.0.0.1",
00:19:24.019 "trsvcid": "40112"
00:19:24.019 },
00:19:24.019 "auth": {
00:19:24.019 "state": "completed",
00:19:24.019 "digest": "sha384",
00:19:24.019 "dhgroup": "null"
00:19:24.019 }
00:19:24.019 }
00:19:24.019 ]'
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:24.019 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:24.275 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd:
00:19:24.275 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd:
00:19:24.838 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:24.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:24.838 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:24.838 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.839 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.839 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.839 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:24.839 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:24.839 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:25.096 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:25.354
00:19:25.354 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:25.354 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:25.354 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:25.354 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.354 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:25.354 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.354 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:25.612 {
00:19:25.612 "cntlid": 55,
00:19:25.612 "qid": 0,
00:19:25.612 "state": "enabled",
00:19:25.612 "thread": "nvmf_tgt_poll_group_000",
00:19:25.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:25.612 "listen_address": {
00:19:25.612 "trtype": "TCP",
00:19:25.612 "adrfam": "IPv4",
00:19:25.612 "traddr": "10.0.0.2",
00:19:25.612 "trsvcid": "4420"
00:19:25.612 },
00:19:25.612 "peer_address": {
00:19:25.612 "trtype": "TCP",
00:19:25.612 "adrfam": "IPv4",
00:19:25.612 "traddr": "10.0.0.1",
00:19:25.612 "trsvcid": "40132"
00:19:25.612 },
00:19:25.612 "auth": {
00:19:25.612 "state": "completed",
00:19:25.612 "digest": "sha384",
00:19:25.612 "dhgroup": "null"
00:19:25.612 }
00:19:25.612 }
00:19:25.612 ]'
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:25.612 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:25.871 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=:
00:19:25.871 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=:
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:26.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:26.437 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.696 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.953 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.953 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.953 { 00:19:26.953 "cntlid": 57, 00:19:26.953 "qid": 0, 00:19:26.953 "state": "enabled", 00:19:26.954 "thread": "nvmf_tgt_poll_group_000", 00:19:26.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:26.954 "listen_address": { 00:19:26.954 "trtype": "TCP", 00:19:26.954 "adrfam": "IPv4", 00:19:26.954 "traddr": "10.0.0.2", 00:19:26.954 
"trsvcid": "4420" 00:19:26.954 }, 00:19:26.954 "peer_address": { 00:19:26.954 "trtype": "TCP", 00:19:26.954 "adrfam": "IPv4", 00:19:26.954 "traddr": "10.0.0.1", 00:19:26.954 "trsvcid": "40164" 00:19:26.954 }, 00:19:26.954 "auth": { 00:19:26.954 "state": "completed", 00:19:26.954 "digest": "sha384", 00:19:26.954 "dhgroup": "ffdhe2048" 00:19:26.954 } 00:19:26.954 } 00:19:26.954 ]' 00:19:26.954 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.211 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.211 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.211 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.211 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.211 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.211 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.211 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.468 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:27.468 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.034 10:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.034 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.292 00:19:28.292 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.292 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.292 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.551 { 00:19:28.551 "cntlid": 59, 00:19:28.551 "qid": 0, 00:19:28.551 "state": "enabled", 00:19:28.551 "thread": "nvmf_tgt_poll_group_000", 00:19:28.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:28.551 "listen_address": { 00:19:28.551 "trtype": "TCP", 00:19:28.551 "adrfam": "IPv4", 00:19:28.551 "traddr": "10.0.0.2", 00:19:28.551 "trsvcid": "4420" 00:19:28.551 }, 00:19:28.551 "peer_address": { 00:19:28.551 "trtype": "TCP", 00:19:28.551 "adrfam": "IPv4", 00:19:28.551 "traddr": "10.0.0.1", 00:19:28.551 "trsvcid": "40198" 00:19:28.551 }, 00:19:28.551 "auth": { 00:19:28.551 "state": "completed", 00:19:28.551 "digest": "sha384", 00:19:28.551 "dhgroup": "ffdhe2048" 00:19:28.551 } 00:19:28.551 } 00:19:28.551 ]' 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.551 10:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.551 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.809 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.809 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.809 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.809 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.809 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.067 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:29.067 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:29.631 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.632 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.890 00:19:29.890 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.890 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.890 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.149 10:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.149 { 00:19:30.149 "cntlid": 61, 00:19:30.149 "qid": 0, 00:19:30.149 "state": "enabled", 00:19:30.149 "thread": "nvmf_tgt_poll_group_000", 00:19:30.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:30.149 "listen_address": { 00:19:30.149 "trtype": "TCP", 00:19:30.149 "adrfam": "IPv4", 00:19:30.149 "traddr": "10.0.0.2", 00:19:30.149 "trsvcid": "4420" 00:19:30.149 }, 00:19:30.149 "peer_address": { 00:19:30.149 "trtype": "TCP", 00:19:30.149 "adrfam": "IPv4", 00:19:30.149 "traddr": "10.0.0.1", 00:19:30.149 "trsvcid": "36392" 00:19:30.149 }, 00:19:30.149 "auth": { 00:19:30.149 "state": "completed", 00:19:30.149 "digest": "sha384", 00:19:30.149 "dhgroup": "ffdhe2048" 00:19:30.149 } 00:19:30.149 } 00:19:30.149 ]' 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.149 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.407 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.407 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.407 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.407 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:30.407 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:30.970 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.226 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.227 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:31.227 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.227 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.495 00:19:31.495 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.495 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.495 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.758 { 00:19:31.758 "cntlid": 63, 00:19:31.758 "qid": 0, 00:19:31.758 "state": "enabled", 00:19:31.758 "thread": "nvmf_tgt_poll_group_000", 00:19:31.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:31.758 "listen_address": { 00:19:31.758 "trtype": "TCP", 00:19:31.758 "adrfam": 
"IPv4", 00:19:31.758 "traddr": "10.0.0.2", 00:19:31.758 "trsvcid": "4420" 00:19:31.758 }, 00:19:31.758 "peer_address": { 00:19:31.758 "trtype": "TCP", 00:19:31.758 "adrfam": "IPv4", 00:19:31.758 "traddr": "10.0.0.1", 00:19:31.758 "trsvcid": "36420" 00:19:31.758 }, 00:19:31.758 "auth": { 00:19:31.758 "state": "completed", 00:19:31.758 "digest": "sha384", 00:19:31.758 "dhgroup": "ffdhe2048" 00:19:31.758 } 00:19:31.758 } 00:19:31.758 ]' 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.758 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.015 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:32.015 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.580 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.838 
10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.838 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.095 00:19:33.095 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.095 10:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.095 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.351 { 00:19:33.351 "cntlid": 65, 00:19:33.351 "qid": 0, 00:19:33.351 "state": "enabled", 00:19:33.351 "thread": "nvmf_tgt_poll_group_000", 00:19:33.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:33.351 "listen_address": { 00:19:33.351 "trtype": "TCP", 00:19:33.351 "adrfam": "IPv4", 00:19:33.351 "traddr": "10.0.0.2", 00:19:33.351 "trsvcid": "4420" 00:19:33.351 }, 00:19:33.351 "peer_address": { 00:19:33.351 "trtype": "TCP", 00:19:33.351 "adrfam": "IPv4", 00:19:33.351 "traddr": "10.0.0.1", 00:19:33.351 "trsvcid": "36438" 00:19:33.351 }, 00:19:33.351 "auth": { 00:19:33.351 "state": "completed", 00:19:33.351 "digest": "sha384", 00:19:33.351 "dhgroup": "ffdhe3072" 00:19:33.351 } 00:19:33.351 } 00:19:33.351 ]' 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:19:33.351 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.351 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.351 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.351 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.351 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.351 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.609 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:33.609 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:34.174 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.432 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.690 00:19:34.690 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.690 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.690 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.948 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.948 { 00:19:34.948 "cntlid": 67, 00:19:34.948 "qid": 0, 00:19:34.948 "state": "enabled", 00:19:34.948 "thread": "nvmf_tgt_poll_group_000", 00:19:34.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:34.948 "listen_address": { 00:19:34.948 "trtype": "TCP", 00:19:34.948 "adrfam": "IPv4", 00:19:34.948 "traddr": "10.0.0.2", 00:19:34.948 "trsvcid": "4420" 00:19:34.948 }, 00:19:34.948 "peer_address": { 00:19:34.948 "trtype": "TCP", 00:19:34.948 "adrfam": "IPv4", 00:19:34.948 "traddr": "10.0.0.1", 00:19:34.948 "trsvcid": "36466" 00:19:34.948 }, 00:19:34.948 "auth": { 00:19:34.948 "state": "completed", 00:19:34.948 "digest": "sha384", 00:19:34.948 "dhgroup": "ffdhe3072" 00:19:34.948 } 00:19:34.948 } 00:19:34.948 ]' 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.948 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.206 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:35.206 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:35.786 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.075 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.363 00:19:36.363 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.363 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.363 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.635 { 00:19:36.635 "cntlid": 69, 00:19:36.635 "qid": 0, 00:19:36.635 "state": "enabled", 00:19:36.635 "thread": "nvmf_tgt_poll_group_000", 00:19:36.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:36.635 
"listen_address": { 00:19:36.635 "trtype": "TCP", 00:19:36.635 "adrfam": "IPv4", 00:19:36.635 "traddr": "10.0.0.2", 00:19:36.635 "trsvcid": "4420" 00:19:36.635 }, 00:19:36.635 "peer_address": { 00:19:36.635 "trtype": "TCP", 00:19:36.635 "adrfam": "IPv4", 00:19:36.635 "traddr": "10.0.0.1", 00:19:36.635 "trsvcid": "36502" 00:19:36.635 }, 00:19:36.635 "auth": { 00:19:36.635 "state": "completed", 00:19:36.635 "digest": "sha384", 00:19:36.635 "dhgroup": "ffdhe3072" 00:19:36.635 } 00:19:36.635 } 00:19:36.635 ]' 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.635 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.893 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:36.893 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:37.459 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.716 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.973 00:19:37.973 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.973 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:37.973 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.973 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.973 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.973 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.973 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.231 { 00:19:38.231 "cntlid": 71, 00:19:38.231 "qid": 0, 00:19:38.231 "state": "enabled", 00:19:38.231 "thread": "nvmf_tgt_poll_group_000", 00:19:38.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:38.231 "listen_address": { 00:19:38.231 "trtype": "TCP", 00:19:38.231 "adrfam": "IPv4", 00:19:38.231 "traddr": "10.0.0.2", 00:19:38.231 "trsvcid": "4420" 00:19:38.231 }, 00:19:38.231 "peer_address": { 00:19:38.231 "trtype": "TCP", 00:19:38.231 "adrfam": "IPv4", 00:19:38.231 "traddr": "10.0.0.1", 00:19:38.231 "trsvcid": "36532" 00:19:38.231 }, 00:19:38.231 "auth": { 00:19:38.231 "state": "completed", 00:19:38.231 "digest": "sha384", 00:19:38.231 "dhgroup": "ffdhe3072" 00:19:38.231 } 00:19:38.231 } 00:19:38.231 ]' 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.231 10:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.231 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.488 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:38.488 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:39.053 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.312 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.570 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.570 10:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.570 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.570 { 00:19:39.570 "cntlid": 73, 00:19:39.570 "qid": 0, 00:19:39.570 "state": "enabled", 00:19:39.570 "thread": "nvmf_tgt_poll_group_000", 00:19:39.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:39.570 "listen_address": { 00:19:39.570 "trtype": "TCP", 00:19:39.570 "adrfam": "IPv4", 00:19:39.570 "traddr": "10.0.0.2", 00:19:39.570 "trsvcid": "4420" 00:19:39.570 }, 00:19:39.570 "peer_address": { 00:19:39.570 "trtype": "TCP", 00:19:39.570 "adrfam": "IPv4", 00:19:39.570 "traddr": "10.0.0.1", 00:19:39.570 "trsvcid": "33154" 00:19:39.570 }, 00:19:39.570 "auth": { 00:19:39.570 "state": "completed", 00:19:39.570 "digest": "sha384", 00:19:39.570 "dhgroup": "ffdhe4096" 00:19:39.571 } 00:19:39.571 } 00:19:39.571 ]' 00:19:39.829 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.829 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.829 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.829 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.829 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.829 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.829 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.829 10:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.087 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:40.087 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.653 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.911 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.911 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.911 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.911 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.911 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.911 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.168 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.168 { 00:19:41.168 "cntlid": 75, 00:19:41.168 "qid": 0, 00:19:41.168 "state": "enabled", 00:19:41.168 "thread": "nvmf_tgt_poll_group_000", 00:19:41.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:41.168 
"listen_address": { 00:19:41.168 "trtype": "TCP", 00:19:41.168 "adrfam": "IPv4", 00:19:41.168 "traddr": "10.0.0.2", 00:19:41.168 "trsvcid": "4420" 00:19:41.168 }, 00:19:41.168 "peer_address": { 00:19:41.168 "trtype": "TCP", 00:19:41.168 "adrfam": "IPv4", 00:19:41.168 "traddr": "10.0.0.1", 00:19:41.168 "trsvcid": "33184" 00:19:41.168 }, 00:19:41.168 "auth": { 00:19:41.168 "state": "completed", 00:19:41.168 "digest": "sha384", 00:19:41.168 "dhgroup": "ffdhe4096" 00:19:41.168 } 00:19:41.168 } 00:19:41.168 ]' 00:19:41.168 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.426 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.426 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.426 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:41.426 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.426 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.426 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.426 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.683 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:41.683 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.248 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.249 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.249 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.505 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.762 { 00:19:42.762 "cntlid": 77, 00:19:42.762 "qid": 0, 00:19:42.762 "state": "enabled", 00:19:42.762 "thread": "nvmf_tgt_poll_group_000", 00:19:42.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:42.762 "listen_address": { 00:19:42.762 "trtype": "TCP", 00:19:42.762 "adrfam": "IPv4", 00:19:42.762 "traddr": "10.0.0.2", 00:19:42.762 "trsvcid": "4420" 00:19:42.762 }, 00:19:42.762 "peer_address": { 00:19:42.762 "trtype": "TCP", 00:19:42.762 "adrfam": "IPv4", 00:19:42.762 "traddr": "10.0.0.1", 00:19:42.762 "trsvcid": "33194" 00:19:42.762 }, 00:19:42.762 "auth": { 00:19:42.762 "state": "completed", 00:19:42.762 "digest": "sha384", 00:19:42.762 "dhgroup": "ffdhe4096" 00:19:42.762 } 00:19:42.762 } 00:19:42.762 ]' 00:19:42.762 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.018 10:30:20 
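[annotation] Each pass of the trace above repeats the same host-side RPC sequence from target/auth.sh: restrict the host's DH-HMAC-CHAP options, register the host NQN on the subsystem with a key (and a controller key for bidirectional auth), attach a controller so the handshake runs, then verify and tear down. A minimal dry-run sketch of one pass, using the socket path, NQNs and address seen in the log; the `RPC="echo …"` indirection is an illustrative assumption so the sketch runs without a live SPDK target (point `RPC` at `scripts/rpc.py` to run it for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect_authenticate pass from the trace above.
RPC="echo rpc.py -s /var/tmp/host.sock"   # swap echo out against a live target
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562"

digest=sha384 dhgroup=ffdhe4096 keyid=1

# 1. Limit the host to a single digest/dhgroup combination.
$RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. Allow the host on the subsystem, with key + controller (bidirectional) key.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 3. Attach a controller; this is where the DH-HMAC-CHAP exchange happens.
$RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 4. After checking nvmf_subsystem_get_qpairs (auth.state == "completed"), tear down.
$RPC bdev_nvme_detach_controller nvme0
```

The trace then repeats the check on the kernel path with `nvme connect … --dhchap-secret`/`--dhchap-ctrl-secret` before removing the host and moving to the next key.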
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.018 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.019 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.019 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.019 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.019 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.019 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.275 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:43.275 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:43.837 10:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.837 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.095 00:19:44.352 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.352 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.352 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.352 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.352 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.352 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.352 10:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.352 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.352 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.352 { 00:19:44.352 "cntlid": 79, 00:19:44.352 "qid": 0, 00:19:44.352 "state": "enabled", 00:19:44.352 "thread": "nvmf_tgt_poll_group_000", 00:19:44.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:44.352 "listen_address": { 00:19:44.352 "trtype": "TCP", 00:19:44.352 "adrfam": "IPv4", 00:19:44.352 "traddr": "10.0.0.2", 00:19:44.352 "trsvcid": "4420" 00:19:44.352 }, 00:19:44.352 "peer_address": { 00:19:44.352 "trtype": "TCP", 00:19:44.352 "adrfam": "IPv4", 00:19:44.352 "traddr": "10.0.0.1", 00:19:44.352 "trsvcid": "33220" 00:19:44.352 }, 00:19:44.352 "auth": { 00:19:44.353 "state": "completed", 00:19:44.353 "digest": "sha384", 00:19:44.353 "dhgroup": "ffdhe4096" 00:19:44.353 } 00:19:44.353 } 00:19:44.353 ]' 00:19:44.353 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.353 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.353 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.609 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.609 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.609 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.609 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.609 10:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.866 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:44.866 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.427 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:19:45.428 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.428 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.991 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.992 { 00:19:45.992 "cntlid": 81, 00:19:45.992 "qid": 0, 00:19:45.992 "state": "enabled", 00:19:45.992 "thread": "nvmf_tgt_poll_group_000", 00:19:45.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:45.992 "listen_address": { 
00:19:45.992 "trtype": "TCP", 00:19:45.992 "adrfam": "IPv4", 00:19:45.992 "traddr": "10.0.0.2", 00:19:45.992 "trsvcid": "4420" 00:19:45.992 }, 00:19:45.992 "peer_address": { 00:19:45.992 "trtype": "TCP", 00:19:45.992 "adrfam": "IPv4", 00:19:45.992 "traddr": "10.0.0.1", 00:19:45.992 "trsvcid": "33260" 00:19:45.992 }, 00:19:45.992 "auth": { 00:19:45.992 "state": "completed", 00:19:45.992 "digest": "sha384", 00:19:45.992 "dhgroup": "ffdhe6144" 00:19:45.992 } 00:19:45.992 } 00:19:45.992 ]' 00:19:45.992 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.249 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.250 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.250 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:46.250 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.250 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.250 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.250 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.508 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:46.508 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.074 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.331 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.332 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.332 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.589 00:19:47.589 10:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.589 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.589 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.846 { 00:19:47.846 "cntlid": 83, 00:19:47.846 "qid": 0, 00:19:47.846 "state": "enabled", 00:19:47.846 "thread": "nvmf_tgt_poll_group_000", 00:19:47.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:47.846 "listen_address": { 00:19:47.846 "trtype": "TCP", 00:19:47.846 "adrfam": "IPv4", 00:19:47.846 "traddr": "10.0.0.2", 00:19:47.846 "trsvcid": "4420" 00:19:47.846 }, 00:19:47.846 "peer_address": { 00:19:47.846 "trtype": "TCP", 00:19:47.846 "adrfam": "IPv4", 00:19:47.846 "traddr": "10.0.0.1", 00:19:47.846 "trsvcid": "33284" 00:19:47.846 }, 00:19:47.846 "auth": { 00:19:47.846 "state": "completed", 00:19:47.846 "digest": "sha384", 00:19:47.846 "dhgroup": "ffdhe6144" 00:19:47.846 } 00:19:47.846 } 00:19:47.846 ]' 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.846 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.103 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:48.103 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:48.668 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.668 10:30:26 
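[annotation] The transition from ffdhe4096 to ffdhe6144 above comes from the nested loops visible in the trace at target/auth.sh@119 (`for dhgroup in "${dhgroups[@]}"`) and @120 (`for keyid in "${!keys[@]}"`), which exercise every (dhgroup, keyid) pair for the current digest. A sketch of that loop shape; the exact contents of the `dhgroups` and `keys` arrays are assumptions (auth.sh builds them elsewhere), though keys 0–3 and the ffdhe4096→ffdhe6144 progression match this log:

```shell
# Loop structure reconstructed from the target/auth.sh@119-123 trace lines.
# Array contents are illustrative assumptions; keyid is the array index,
# which is why connect_authenticate is called with 0..3.
digest=sha384
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        echo "connect_authenticate $digest $dhgroup $keyid"
    done
done
```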
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:48.668 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.668 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.668 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.668 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.668 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.668 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.926 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.183 00:19:49.183 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.183 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.183 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.441 { 00:19:49.441 "cntlid": 85, 00:19:49.441 "qid": 0, 00:19:49.441 "state": "enabled", 00:19:49.441 "thread": "nvmf_tgt_poll_group_000", 00:19:49.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:49.441 "listen_address": { 00:19:49.441 "trtype": "TCP", 00:19:49.441 "adrfam": "IPv4", 00:19:49.441 "traddr": "10.0.0.2", 00:19:49.441 "trsvcid": "4420" 00:19:49.441 }, 00:19:49.441 "peer_address": { 00:19:49.441 "trtype": "TCP", 00:19:49.441 "adrfam": "IPv4", 00:19:49.441 "traddr": "10.0.0.1", 00:19:49.441 "trsvcid": "35786" 00:19:49.441 }, 00:19:49.441 "auth": { 00:19:49.441 "state": "completed", 00:19:49.441 "digest": "sha384", 00:19:49.441 "dhgroup": "ffdhe6144" 00:19:49.441 } 00:19:49.441 } 00:19:49.441 ]' 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.441 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.698 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:49.698 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:50.262 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.520 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.778 00:19:50.778 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.778 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.778 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.035 { 00:19:51.035 "cntlid": 87, 00:19:51.035 "qid": 0, 00:19:51.035 "state": "enabled", 00:19:51.035 "thread": "nvmf_tgt_poll_group_000", 00:19:51.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:51.035 "listen_address": { 00:19:51.035 "trtype": 
"TCP", 00:19:51.035 "adrfam": "IPv4", 00:19:51.035 "traddr": "10.0.0.2", 00:19:51.035 "trsvcid": "4420" 00:19:51.035 }, 00:19:51.035 "peer_address": { 00:19:51.035 "trtype": "TCP", 00:19:51.035 "adrfam": "IPv4", 00:19:51.035 "traddr": "10.0.0.1", 00:19:51.035 "trsvcid": "35816" 00:19:51.035 }, 00:19:51.035 "auth": { 00:19:51.035 "state": "completed", 00:19:51.035 "digest": "sha384", 00:19:51.035 "dhgroup": "ffdhe6144" 00:19:51.035 } 00:19:51.035 } 00:19:51.035 ]' 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.035 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.292 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.292 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.292 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.292 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.292 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.292 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:51.292 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:51.855 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.112 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.112 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.676 00:19:52.676 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.676 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.676 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.935 { 00:19:52.935 "cntlid": 89, 00:19:52.935 "qid": 0, 00:19:52.935 "state": "enabled", 00:19:52.935 "thread": "nvmf_tgt_poll_group_000", 00:19:52.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:52.935 "listen_address": { 00:19:52.935 "trtype": "TCP", 00:19:52.935 "adrfam": "IPv4", 00:19:52.935 "traddr": "10.0.0.2", 00:19:52.935 "trsvcid": "4420" 00:19:52.935 }, 00:19:52.935 "peer_address": { 00:19:52.935 "trtype": "TCP", 00:19:52.935 "adrfam": "IPv4", 00:19:52.935 "traddr": "10.0.0.1", 00:19:52.935 "trsvcid": "35852" 00:19:52.935 }, 00:19:52.935 "auth": { 00:19:52.935 "state": "completed", 00:19:52.935 "digest": "sha384", 00:19:52.935 "dhgroup": "ffdhe8192" 00:19:52.935 } 00:19:52.935 } 00:19:52.935 ]' 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.935 10:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.935 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.193 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:53.193 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:53.759 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.017 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.583 00:19:54.583 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.583 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.583 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.841 { 00:19:54.841 "cntlid": 91, 00:19:54.841 "qid": 0, 00:19:54.841 "state": "enabled", 00:19:54.841 "thread": "nvmf_tgt_poll_group_000", 00:19:54.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:54.841 "listen_address": { 00:19:54.841 "trtype": "TCP", 00:19:54.841 "adrfam": "IPv4", 00:19:54.841 "traddr": "10.0.0.2", 00:19:54.841 "trsvcid": "4420" 00:19:54.841 }, 00:19:54.841 "peer_address": { 00:19:54.841 "trtype": "TCP", 00:19:54.841 "adrfam": "IPv4", 00:19:54.841 "traddr": "10.0.0.1", 00:19:54.841 "trsvcid": "35886" 00:19:54.841 }, 00:19:54.841 "auth": { 00:19:54.841 "state": "completed", 00:19:54.841 "digest": "sha384", 00:19:54.841 "dhgroup": "ffdhe8192" 00:19:54.841 } 00:19:54.841 } 00:19:54.841 ]' 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.841 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.099 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:55.099 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:55.665 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:55.923 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:55.924 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:56.182
00:19:56.440 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:56.440 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:56.440 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:56.440 {
00:19:56.440 "cntlid": 93,
00:19:56.440 "qid": 0,
00:19:56.440 "state": "enabled",
00:19:56.440 "thread": "nvmf_tgt_poll_group_000",
00:19:56.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:56.440 "listen_address": {
00:19:56.440 "trtype": "TCP",
00:19:56.440 "adrfam": "IPv4",
00:19:56.440 "traddr": "10.0.0.2",
00:19:56.440 "trsvcid": "4420"
00:19:56.440 },
00:19:56.440 "peer_address": {
00:19:56.440 "trtype": "TCP",
00:19:56.440 "adrfam": "IPv4",
00:19:56.440 "traddr": "10.0.0.1",
00:19:56.440 "trsvcid": "35900"
00:19:56.440 },
00:19:56.440 "auth": {
00:19:56.440 "state": "completed",
00:19:56.440 "digest": "sha384",
00:19:56.440 "dhgroup": "ffdhe8192"
00:19:56.440 }
00:19:56.440 }
00:19:56.440 ]'
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:56.440 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:56.697 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:56.697 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:56.697 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:56.697 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:56.697 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:56.955 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd:
00:19:56.955 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd:
00:19:57.517 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:57.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:57.517 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:57.518 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:58.080
00:19:58.080 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:58.080 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:58.080 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:58.337 {
00:19:58.337 "cntlid": 95,
00:19:58.337 "qid": 0,
00:19:58.337 "state": "enabled",
00:19:58.337 "thread": "nvmf_tgt_poll_group_000",
00:19:58.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:58.337 "listen_address": {
00:19:58.337 "trtype": "TCP",
00:19:58.337 "adrfam": "IPv4",
00:19:58.337 "traddr": "10.0.0.2",
00:19:58.337 "trsvcid": "4420"
00:19:58.337 },
00:19:58.337 "peer_address": {
00:19:58.337 "trtype": "TCP",
00:19:58.337 "adrfam": "IPv4",
00:19:58.337 "traddr": "10.0.0.1",
00:19:58.337 "trsvcid": "35934"
00:19:58.337 },
00:19:58.337 "auth": {
00:19:58.337 "state": "completed",
00:19:58.337 "digest": "sha384",
00:19:58.337 "dhgroup": "ffdhe8192"
00:19:58.337 }
00:19:58.337 }
00:19:58.337 ]'
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:58.337 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:58.338 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:58.338 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:58.338 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:58.595 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=:
00:19:58.595 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=:
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:59.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:59.159 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.415 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.672
00:19:59.672 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:59.672 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:59.672 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:59.929 {
00:19:59.929 "cntlid": 97,
00:19:59.929 "qid": 0,
00:19:59.929 "state": "enabled",
00:19:59.929 "thread": "nvmf_tgt_poll_group_000",
00:19:59.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:59.929 "listen_address": {
00:19:59.929 "trtype": "TCP",
00:19:59.929 "adrfam": "IPv4",
00:19:59.929 "traddr": "10.0.0.2",
00:19:59.929 "trsvcid": "4420"
00:19:59.929 },
00:19:59.929 "peer_address": {
00:19:59.929 "trtype": "TCP",
00:19:59.929 "adrfam": "IPv4",
00:19:59.929 "traddr": "10.0.0.1",
00:19:59.929 "trsvcid": "48492"
00:19:59.929 },
00:19:59.929 "auth": {
00:19:59.929 "state": "completed",
00:19:59.929 "digest": "sha512",
00:19:59.929 "dhgroup": "null"
00:19:59.929 }
00:19:59.929 }
00:19:59.929 ]'
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:59.929 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:00.186 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=:
00:20:00.186 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=:
00:20:00.749 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:00.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:00.749 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:00.750 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.750 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.750 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.750 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:00.750 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:00.750 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.007 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.264
00:20:01.264 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:01.264 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:01.264 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:01.521 {
00:20:01.521 "cntlid": 99,
00:20:01.521 "qid": 0,
00:20:01.521 "state": "enabled",
00:20:01.521 "thread": "nvmf_tgt_poll_group_000",
00:20:01.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:01.521 "listen_address": {
00:20:01.521 "trtype": "TCP",
00:20:01.521 "adrfam": "IPv4",
00:20:01.521 "traddr": "10.0.0.2",
00:20:01.521 "trsvcid": "4420"
00:20:01.521 },
00:20:01.521 "peer_address": {
00:20:01.521 "trtype": "TCP",
00:20:01.521 "adrfam": "IPv4",
00:20:01.521 "traddr": "10.0.0.1",
00:20:01.521 "trsvcid": "48522"
00:20:01.521 },
00:20:01.521 "auth": {
00:20:01.521 "state": "completed",
00:20:01.521 "digest": "sha512",
00:20:01.521 "dhgroup": "null"
00:20:01.521 }
00:20:01.521 }
00:20:01.521 ]'
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:01.521 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:01.522 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:01.522 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:01.522 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:01.522 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:01.780 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==:
00:20:01.780 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==:
00:20:02.343 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:02.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:02.343 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:02.343 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.343 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.343 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.343 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:02.344 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:02.344 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:02.615 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:02.872
00:20:02.872 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:02.872 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:02.872 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:03.129 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:03.130 {
00:20:03.130 "cntlid": 101,
00:20:03.130 "qid": 0,
00:20:03.130 "state": "enabled",
00:20:03.130 "thread": "nvmf_tgt_poll_group_000",
00:20:03.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:03.130 "listen_address": {
00:20:03.130 "trtype": "TCP",
00:20:03.130 "adrfam": "IPv4",
00:20:03.130 "traddr": "10.0.0.2",
00:20:03.130 "trsvcid": "4420"
00:20:03.130 },
00:20:03.130 "peer_address": {
00:20:03.130 "trtype": "TCP",
00:20:03.130 "adrfam": "IPv4",
00:20:03.130 "traddr": "10.0.0.1",
00:20:03.130 "trsvcid": "48548"
00:20:03.130 },
00:20:03.130 "auth": {
00:20:03.130 "state": "completed",
00:20:03.130 "digest": "sha512",
00:20:03.130 "dhgroup": "null"
00:20:03.130 }
00:20:03.130 }
00:20:03.130 ]'
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:03.130 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:03.386 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd:
00:20:03.386 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd:
00:20:03.948 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:03.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:03.949 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:03.949 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.949 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.949 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.949 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:03.949 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:03.949 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:20:04.206 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.207 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.207 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.207 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:04.207 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:04.207 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:04.465
00:20:04.465 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:04.465 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:04.465 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:04.465 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:04.465 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:04.465 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.465 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.735 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.735 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:04.735 {
00:20:04.735 "cntlid": 103,
00:20:04.735 "qid": 0,
00:20:04.735 "state": "enabled",
00:20:04.735 "thread": "nvmf_tgt_poll_group_000",
00:20:04.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:04.735 "listen_address": {
00:20:04.735 "trtype": "TCP",
00:20:04.735 "adrfam": "IPv4",
00:20:04.735 "traddr": "10.0.0.2",
00:20:04.735 "trsvcid": "4420"
00:20:04.735 },
00:20:04.735 "peer_address": {
00:20:04.735 "trtype": "TCP",
00:20:04.735 "adrfam": "IPv4",
00:20:04.735 "traddr": "10.0.0.1",
00:20:04.736 "trsvcid": "48570"
00:20:04.736 },
00:20:04.736 "auth": {
00:20:04.736 "state": "completed",
00:20:04.736 "digest": "sha512",
00:20:04.736 "dhgroup": "null"
00:20:04.736 }
00:20:04.736 }
00:20:04.736 ]'
00:20:04.736 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:04.736 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:04.736 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:04.736 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:04.736 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:04.736 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:04.736 10:30:42
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.736 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.001 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:05.001 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.566 10:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.566 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.822 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.080 { 00:20:06.080 "cntlid": 105, 00:20:06.080 "qid": 0, 00:20:06.080 "state": "enabled", 00:20:06.080 "thread": "nvmf_tgt_poll_group_000", 00:20:06.080 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:06.080 "listen_address": { 00:20:06.080 "trtype": "TCP", 00:20:06.080 "adrfam": "IPv4", 00:20:06.080 "traddr": "10.0.0.2", 00:20:06.080 "trsvcid": "4420" 00:20:06.080 }, 00:20:06.080 "peer_address": { 00:20:06.080 "trtype": "TCP", 00:20:06.080 "adrfam": "IPv4", 00:20:06.080 "traddr": "10.0.0.1", 00:20:06.080 "trsvcid": "48598" 00:20:06.080 }, 00:20:06.080 "auth": { 00:20:06.080 "state": "completed", 00:20:06.080 "digest": "sha512", 00:20:06.080 "dhgroup": "ffdhe2048" 00:20:06.080 } 00:20:06.080 } 00:20:06.080 ]' 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.080 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.593 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret 
DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:06.593 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.160 10:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.160 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.418 00:20:07.418 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.418 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.418 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.674 { 00:20:07.674 "cntlid": 107, 00:20:07.674 "qid": 0, 00:20:07.674 "state": "enabled", 00:20:07.674 "thread": "nvmf_tgt_poll_group_000", 00:20:07.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:07.674 "listen_address": { 00:20:07.674 "trtype": "TCP", 00:20:07.674 "adrfam": "IPv4", 00:20:07.674 "traddr": "10.0.0.2", 00:20:07.674 "trsvcid": "4420" 00:20:07.674 }, 00:20:07.674 "peer_address": { 00:20:07.674 "trtype": "TCP", 00:20:07.674 "adrfam": "IPv4", 00:20:07.674 "traddr": "10.0.0.1", 00:20:07.674 "trsvcid": "48634" 00:20:07.674 }, 00:20:07.674 "auth": { 00:20:07.674 "state": 
"completed", 00:20:07.674 "digest": "sha512", 00:20:07.674 "dhgroup": "ffdhe2048" 00:20:07.674 } 00:20:07.674 } 00:20:07.674 ]' 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.674 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.930 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.930 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.930 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.930 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.930 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.930 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:07.930 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:08.493 10:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.751 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.009 00:20:09.009 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.009 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.009 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.267 
10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.267 { 00:20:09.267 "cntlid": 109, 00:20:09.267 "qid": 0, 00:20:09.267 "state": "enabled", 00:20:09.267 "thread": "nvmf_tgt_poll_group_000", 00:20:09.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:09.267 "listen_address": { 00:20:09.267 "trtype": "TCP", 00:20:09.267 "adrfam": "IPv4", 00:20:09.267 "traddr": "10.0.0.2", 00:20:09.267 "trsvcid": "4420" 00:20:09.267 }, 00:20:09.267 "peer_address": { 00:20:09.267 "trtype": "TCP", 00:20:09.267 "adrfam": "IPv4", 00:20:09.267 "traddr": "10.0.0.1", 00:20:09.267 "trsvcid": "54286" 00:20:09.267 }, 00:20:09.267 "auth": { 00:20:09.267 "state": "completed", 00:20:09.267 "digest": "sha512", 00:20:09.267 "dhgroup": "ffdhe2048" 00:20:09.267 } 00:20:09.267 } 00:20:09.267 ]' 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.267 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.267 10:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.524 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.524 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.524 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.524 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:09.524 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:10.090 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.090 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.090 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.090 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.090 
10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.090 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.090 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.090 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.348 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.348 10:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.348 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.348 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.604 00:20:10.605 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.605 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.605 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.860 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.860 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.860 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.860 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.860 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.860 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.860 { 00:20:10.860 "cntlid": 111, 
00:20:10.860 "qid": 0, 00:20:10.860 "state": "enabled", 00:20:10.860 "thread": "nvmf_tgt_poll_group_000", 00:20:10.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:10.860 "listen_address": { 00:20:10.860 "trtype": "TCP", 00:20:10.860 "adrfam": "IPv4", 00:20:10.860 "traddr": "10.0.0.2", 00:20:10.860 "trsvcid": "4420" 00:20:10.860 }, 00:20:10.860 "peer_address": { 00:20:10.860 "trtype": "TCP", 00:20:10.860 "adrfam": "IPv4", 00:20:10.860 "traddr": "10.0.0.1", 00:20:10.860 "trsvcid": "54302" 00:20:10.860 }, 00:20:10.860 "auth": { 00:20:10.860 "state": "completed", 00:20:10.860 "digest": "sha512", 00:20:10.860 "dhgroup": "ffdhe2048" 00:20:10.860 } 00:20:10.860 } 00:20:10.860 ]' 00:20:10.860 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.861 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.861 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.861 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.861 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.118 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.118 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.118 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.118 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:11.118 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.681 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.939 10:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.939 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.197 00:20:12.197 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.197 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.197 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.455 { 00:20:12.455 "cntlid": 113, 00:20:12.455 "qid": 0, 00:20:12.455 "state": "enabled", 00:20:12.455 "thread": "nvmf_tgt_poll_group_000", 00:20:12.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:12.455 "listen_address": { 00:20:12.455 "trtype": "TCP", 00:20:12.455 "adrfam": "IPv4", 00:20:12.455 "traddr": "10.0.0.2", 00:20:12.455 "trsvcid": "4420" 00:20:12.455 }, 00:20:12.455 "peer_address": { 00:20:12.455 "trtype": "TCP", 00:20:12.455 "adrfam": "IPv4", 00:20:12.455 "traddr": "10.0.0.1", 00:20:12.455 "trsvcid": "54332" 00:20:12.455 }, 00:20:12.455 "auth": { 00:20:12.455 "state": 
"completed", 00:20:12.455 "digest": "sha512", 00:20:12.455 "dhgroup": "ffdhe3072" 00:20:12.455 } 00:20:12.455 } 00:20:12.455 ]' 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.455 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.712 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:12.713 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret 
DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.302 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.573 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.860 00:20:13.860 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.860 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.860 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.135 { 00:20:14.135 "cntlid": 115, 00:20:14.135 "qid": 0, 00:20:14.135 "state": "enabled", 00:20:14.135 "thread": "nvmf_tgt_poll_group_000", 00:20:14.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:14.135 "listen_address": { 00:20:14.135 "trtype": "TCP", 00:20:14.135 "adrfam": "IPv4", 00:20:14.135 "traddr": "10.0.0.2", 00:20:14.135 "trsvcid": "4420" 00:20:14.135 }, 00:20:14.135 "peer_address": { 00:20:14.135 "trtype": "TCP", 00:20:14.135 "adrfam": "IPv4", 00:20:14.135 "traddr": "10.0.0.1", 00:20:14.135 "trsvcid": "54366" 00:20:14.135 }, 00:20:14.135 "auth": { 00:20:14.135 "state": "completed", 00:20:14.135 "digest": "sha512", 00:20:14.135 "dhgroup": "ffdhe3072" 00:20:14.135 } 00:20:14.135 } 00:20:14.135 ]' 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.135 10:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.135 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.393 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:14.393 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.958 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.216 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.474 00:20:15.474 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.474 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.474 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.474 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.474 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.474 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.474 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.733 10:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.733 { 00:20:15.733 "cntlid": 117, 00:20:15.733 "qid": 0, 00:20:15.733 "state": "enabled", 00:20:15.733 "thread": "nvmf_tgt_poll_group_000", 00:20:15.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:15.733 "listen_address": { 00:20:15.733 "trtype": "TCP", 00:20:15.733 "adrfam": "IPv4", 00:20:15.733 "traddr": "10.0.0.2", 00:20:15.733 "trsvcid": "4420" 00:20:15.733 }, 00:20:15.733 "peer_address": { 00:20:15.733 "trtype": "TCP", 00:20:15.733 "adrfam": "IPv4", 00:20:15.733 "traddr": "10.0.0.1", 00:20:15.733 "trsvcid": "54384" 00:20:15.733 }, 00:20:15.733 "auth": { 00:20:15.733 "state": "completed", 00:20:15.733 "digest": "sha512", 00:20:15.733 "dhgroup": "ffdhe3072" 00:20:15.733 } 00:20:15.733 } 00:20:15.733 ]' 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.733 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.990 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:15.990 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.557 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.558 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.816 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.816 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.816 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.816 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.073 00:20:17.073 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.074 { 00:20:17.074 "cntlid": 119, 00:20:17.074 "qid": 0, 00:20:17.074 "state": "enabled", 00:20:17.074 "thread": "nvmf_tgt_poll_group_000", 00:20:17.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:17.074 "listen_address": { 00:20:17.074 "trtype": "TCP", 00:20:17.074 "adrfam": "IPv4", 00:20:17.074 "traddr": "10.0.0.2", 00:20:17.074 "trsvcid": "4420" 00:20:17.074 }, 00:20:17.074 "peer_address": { 00:20:17.074 "trtype": "TCP", 00:20:17.074 "adrfam": "IPv4", 00:20:17.074 "traddr": "10.0.0.1", 
00:20:17.074 "trsvcid": "54400" 00:20:17.074 }, 00:20:17.074 "auth": { 00:20:17.074 "state": "completed", 00:20:17.074 "digest": "sha512", 00:20:17.074 "dhgroup": "ffdhe3072" 00:20:17.074 } 00:20:17.074 } 00:20:17.074 ]' 00:20:17.074 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.332 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.332 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.332 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.332 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.332 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.332 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.332 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:17.592 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.157 10:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.157 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.413 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.413 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.413 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.413 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.669 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.669 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.669 { 00:20:18.669 "cntlid": 121, 00:20:18.669 "qid": 0, 00:20:18.669 "state": "enabled", 00:20:18.669 "thread": "nvmf_tgt_poll_group_000", 00:20:18.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:18.670 "listen_address": { 00:20:18.670 "trtype": "TCP", 00:20:18.670 "adrfam": "IPv4", 00:20:18.670 "traddr": "10.0.0.2", 00:20:18.670 "trsvcid": "4420" 00:20:18.670 }, 00:20:18.670 "peer_address": { 00:20:18.670 "trtype": "TCP", 00:20:18.670 "adrfam": "IPv4", 00:20:18.670 "traddr": "10.0.0.1", 00:20:18.670 "trsvcid": "54424" 00:20:18.670 }, 00:20:18.670 "auth": { 00:20:18.670 "state": "completed", 00:20:18.670 "digest": "sha512", 00:20:18.670 "dhgroup": "ffdhe4096" 00:20:18.670 } 00:20:18.670 } 00:20:18.670 ]' 00:20:18.670 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.926 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.926 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.926 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.926 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.926 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.926 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.926 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.182 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:19.182 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:19.745 10:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.745 10:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.745 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.002 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.260 { 00:20:20.260 "cntlid": 123, 00:20:20.260 "qid": 0, 00:20:20.260 "state": "enabled", 00:20:20.260 "thread": "nvmf_tgt_poll_group_000", 00:20:20.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:20.260 "listen_address": { 00:20:20.260 "trtype": "TCP", 00:20:20.260 "adrfam": "IPv4", 00:20:20.260 "traddr": "10.0.0.2", 00:20:20.260 "trsvcid": "4420" 00:20:20.260 }, 00:20:20.260 "peer_address": { 00:20:20.260 "trtype": "TCP", 00:20:20.260 "adrfam": "IPv4", 00:20:20.260 "traddr": "10.0.0.1", 00:20:20.260 "trsvcid": "52232" 00:20:20.260 }, 00:20:20.260 "auth": { 00:20:20.260 "state": "completed", 00:20:20.260 "digest": "sha512", 00:20:20.260 "dhgroup": "ffdhe4096" 00:20:20.260 } 00:20:20.260 } 00:20:20.260 ]' 00:20:20.260 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.517 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.517 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.517 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.517 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.517 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.517 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.517 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.774 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:20.774 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:21.338 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.338 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:21.338 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.338 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.338 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.338 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.338 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.338 10:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.338 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.595 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.852 { 00:20:21.852 "cntlid": 125, 00:20:21.852 "qid": 0, 00:20:21.852 "state": "enabled", 00:20:21.852 "thread": "nvmf_tgt_poll_group_000", 00:20:21.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:21.852 "listen_address": { 00:20:21.852 "trtype": "TCP", 00:20:21.852 "adrfam": "IPv4", 00:20:21.852 "traddr": "10.0.0.2", 00:20:21.852 
"trsvcid": "4420" 00:20:21.852 }, 00:20:21.852 "peer_address": { 00:20:21.852 "trtype": "TCP", 00:20:21.852 "adrfam": "IPv4", 00:20:21.852 "traddr": "10.0.0.1", 00:20:21.852 "trsvcid": "52264" 00:20:21.852 }, 00:20:21.852 "auth": { 00:20:21.852 "state": "completed", 00:20:21.852 "digest": "sha512", 00:20:21.852 "dhgroup": "ffdhe4096" 00:20:21.852 } 00:20:21.852 } 00:20:21.852 ]' 00:20:21.852 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.109 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.109 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.109 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.109 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.109 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.109 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.109 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.367 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:22.367 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.932 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.190 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.190 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.190 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.190 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.448 00:20:23.448 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.448 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.448 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.448 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.448 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.448 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.448 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.448 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.448 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.448 { 00:20:23.448 "cntlid": 127, 00:20:23.448 "qid": 0, 00:20:23.448 "state": "enabled", 00:20:23.448 "thread": "nvmf_tgt_poll_group_000", 00:20:23.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:23.448 "listen_address": { 00:20:23.448 "trtype": "TCP", 00:20:23.448 "adrfam": "IPv4", 00:20:23.448 "traddr": "10.0.0.2", 00:20:23.448 "trsvcid": "4420" 00:20:23.448 }, 00:20:23.448 "peer_address": { 00:20:23.448 "trtype": "TCP", 00:20:23.448 "adrfam": "IPv4", 00:20:23.448 "traddr": "10.0.0.1", 00:20:23.448 "trsvcid": "52286" 00:20:23.448 }, 00:20:23.448 "auth": { 00:20:23.448 "state": "completed", 00:20:23.448 "digest": "sha512", 00:20:23.448 "dhgroup": "ffdhe4096" 00:20:23.448 } 00:20:23.448 } 00:20:23.449 ]' 00:20:23.449 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.706 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.706 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.706 10:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.706 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.706 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.706 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.706 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.963 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:23.963 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
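[Editor's note] Each iteration in the trace above builds its controller-key argument with `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})`, so the `--dhchap-ctrlr-key` flag is emitted only when a ckey exists for that key index (the key3 iteration passes `--dhchap-key key3` alone). A minimal standalone sketch of that `${var:+word}` conditional-argument pattern; the `ckeys` contents and the `build_args` helper are illustrative, not taken from auth.sh:

```shell
#!/usr/bin/env bash
# Conditional-argument pattern from the trace: ${arr[i]:+...} expands to the
# bracketed words only when arr[i] is set and non-empty, and to nothing at all
# otherwise (no empty-string argument is passed).
ckeys=("c0" "c1" "c2" "")   # index 3 deliberately has no controller key

build_args() {
    local keyid=$1
    # Two words when ckeys[keyid] is non-empty, an empty array otherwise.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "--dhchap-key key$keyid" "${ckey[@]}"
}

build_args 0   # prints: --dhchap-key key0 --dhchap-ctrlr-key ckey0
build_args 3   # prints: --dhchap-key key3
```

This is why the `nvmf_subsystem_add_host` calls in the log carry `--dhchap-ctrlr-key ckey0` through `ckey2` but omit the flag for key3.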
00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.528 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.094 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.094 { 00:20:25.094 "cntlid": 129, 00:20:25.094 "qid": 0, 00:20:25.094 "state": "enabled", 00:20:25.094 "thread": "nvmf_tgt_poll_group_000", 00:20:25.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:25.094 "listen_address": { 00:20:25.094 "trtype": "TCP", 00:20:25.094 "adrfam": "IPv4", 00:20:25.094 "traddr": "10.0.0.2", 00:20:25.094 "trsvcid": "4420" 00:20:25.094 }, 00:20:25.094 "peer_address": { 00:20:25.094 "trtype": "TCP", 00:20:25.094 "adrfam": "IPv4", 00:20:25.094 "traddr": "10.0.0.1", 00:20:25.094 "trsvcid": "52320" 00:20:25.094 }, 00:20:25.094 "auth": { 00:20:25.094 "state": "completed", 00:20:25.094 "digest": "sha512", 00:20:25.094 "dhgroup": "ffdhe6144" 00:20:25.094 } 00:20:25.094 } 00:20:25.094 ]' 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.094 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.352 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.352 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.352 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.352 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.352 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.610 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:25.610 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:26.176 10:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.176 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.742 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.742 { 00:20:26.742 "cntlid": 131, 00:20:26.742 "qid": 0, 00:20:26.742 "state": "enabled", 00:20:26.742 "thread": "nvmf_tgt_poll_group_000", 00:20:26.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:26.742 "listen_address": { 00:20:26.742 "trtype": "TCP", 00:20:26.742 "adrfam": "IPv4", 00:20:26.742 "traddr": "10.0.0.2", 00:20:26.742 
"trsvcid": "4420" 00:20:26.742 }, 00:20:26.742 "peer_address": { 00:20:26.742 "trtype": "TCP", 00:20:26.742 "adrfam": "IPv4", 00:20:26.742 "traddr": "10.0.0.1", 00:20:26.742 "trsvcid": "52346" 00:20:26.742 }, 00:20:26.742 "auth": { 00:20:26.742 "state": "completed", 00:20:26.742 "digest": "sha512", 00:20:26.742 "dhgroup": "ffdhe6144" 00:20:26.742 } 00:20:26.742 } 00:20:26.742 ]' 00:20:26.742 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.000 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.000 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.000 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.000 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.000 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.000 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.000 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.258 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:27.258 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.828 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.085 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.342 00:20:28.342 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.342 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.342 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.600 { 00:20:28.600 "cntlid": 133, 00:20:28.600 "qid": 0, 00:20:28.600 "state": "enabled", 00:20:28.600 "thread": "nvmf_tgt_poll_group_000", 00:20:28.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:28.600 "listen_address": { 00:20:28.600 "trtype": "TCP", 00:20:28.600 "adrfam": "IPv4", 00:20:28.600 "traddr": "10.0.0.2", 00:20:28.600 "trsvcid": "4420" 00:20:28.600 }, 00:20:28.600 "peer_address": { 00:20:28.600 "trtype": "TCP", 00:20:28.600 "adrfam": "IPv4", 00:20:28.600 "traddr": "10.0.0.1", 00:20:28.600 "trsvcid": "52360" 00:20:28.600 }, 00:20:28.600 "auth": { 00:20:28.600 "state": "completed", 00:20:28.600 "digest": "sha512", 00:20:28.600 "dhgroup": "ffdhe6144" 00:20:28.600 } 00:20:28.600 } 00:20:28.600 ]' 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.600 10:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.600 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.857 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:28.857 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:29.481 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.481 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:29.481 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.481 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.481 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.481 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.482 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.739 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.739 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.739 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.739 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.996 00:20:29.996 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.996 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.996 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.253 { 00:20:30.253 "cntlid": 135, 00:20:30.253 "qid": 0, 00:20:30.253 "state": "enabled", 00:20:30.253 "thread": "nvmf_tgt_poll_group_000", 00:20:30.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:30.253 "listen_address": { 00:20:30.253 "trtype": "TCP", 00:20:30.253 "adrfam": "IPv4", 00:20:30.253 "traddr": "10.0.0.2", 00:20:30.253 "trsvcid": "4420" 00:20:30.253 }, 00:20:30.253 "peer_address": { 00:20:30.253 "trtype": "TCP", 00:20:30.253 "adrfam": "IPv4", 00:20:30.253 "traddr": "10.0.0.1", 00:20:30.253 "trsvcid": "58164" 00:20:30.253 }, 00:20:30.253 "auth": { 00:20:30.253 "state": "completed", 00:20:30.253 "digest": "sha512", 00:20:30.253 "dhgroup": "ffdhe6144" 00:20:30.253 } 00:20:30.253 } 00:20:30.253 ]' 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.253 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.510 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:30.510 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.074 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:31.074 10:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.331 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.897 00:20:31.897 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.897 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.897 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.897 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.897 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.898 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.898 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.898 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.898 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.898 { 00:20:31.898 "cntlid": 137, 00:20:31.898 "qid": 0, 00:20:31.898 "state": "enabled", 00:20:31.898 "thread": "nvmf_tgt_poll_group_000", 00:20:31.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:31.898 "listen_address": { 00:20:31.898 "trtype": "TCP", 00:20:31.898 "adrfam": "IPv4", 00:20:31.898 "traddr": "10.0.0.2", 00:20:31.898 
"trsvcid": "4420" 00:20:31.898 }, 00:20:31.898 "peer_address": { 00:20:31.898 "trtype": "TCP", 00:20:31.898 "adrfam": "IPv4", 00:20:31.898 "traddr": "10.0.0.1", 00:20:31.898 "trsvcid": "58196" 00:20:31.898 }, 00:20:31.898 "auth": { 00:20:31.898 "state": "completed", 00:20:31.898 "digest": "sha512", 00:20:31.898 "dhgroup": "ffdhe8192" 00:20:31.898 } 00:20:31.898 } 00:20:31.898 ]' 00:20:31.898 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.898 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.898 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.155 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.155 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.155 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.155 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.155 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.414 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:32.414 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.979 10:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.979 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.545 00:20:33.545 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.545 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.545 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.803 { 00:20:33.803 "cntlid": 139, 00:20:33.803 "qid": 0, 00:20:33.803 "state": "enabled", 00:20:33.803 "thread": "nvmf_tgt_poll_group_000", 00:20:33.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:33.803 "listen_address": { 00:20:33.803 "trtype": "TCP", 00:20:33.803 "adrfam": "IPv4", 00:20:33.803 "traddr": "10.0.0.2", 00:20:33.803 "trsvcid": "4420" 00:20:33.803 }, 00:20:33.803 "peer_address": { 00:20:33.803 "trtype": "TCP", 00:20:33.803 "adrfam": "IPv4", 00:20:33.803 "traddr": "10.0.0.1", 00:20:33.803 "trsvcid": "58232" 00:20:33.803 }, 00:20:33.803 "auth": { 00:20:33.803 "state": "completed", 00:20:33.803 "digest": "sha512", 00:20:33.803 "dhgroup": "ffdhe8192" 00:20:33.803 } 00:20:33.803 } 00:20:33.803 ]' 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.803 10:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.803 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.061 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:34.061 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: --dhchap-ctrl-secret DHHC-1:02:ZmRlNTEyYzkxNjA0NDUzNzFhYTkxYTcwY2RhZjU1NmIxYzhkOWNmMWNhNmZkNWQxFPCJYg==: 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.627 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.885 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.452 00:20:35.452 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.452 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.452 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.452 10:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.452 { 00:20:35.452 "cntlid": 141, 00:20:35.452 "qid": 0, 00:20:35.452 "state": "enabled", 00:20:35.452 "thread": "nvmf_tgt_poll_group_000", 00:20:35.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:35.452 "listen_address": { 00:20:35.452 "trtype": "TCP", 00:20:35.452 "adrfam": "IPv4", 00:20:35.452 "traddr": "10.0.0.2", 00:20:35.452 "trsvcid": "4420" 00:20:35.452 }, 00:20:35.452 "peer_address": { 00:20:35.452 "trtype": "TCP", 00:20:35.452 "adrfam": "IPv4", 00:20:35.452 "traddr": "10.0.0.1", 00:20:35.452 "trsvcid": "58264" 00:20:35.452 }, 00:20:35.452 "auth": { 00:20:35.452 "state": "completed", 00:20:35.452 "digest": "sha512", 00:20:35.452 "dhgroup": "ffdhe8192" 00:20:35.452 } 00:20:35.452 } 00:20:35.452 ]' 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.452 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.709 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.709 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.709 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.709 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.709 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.966 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:35.966 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:01:ZDg4ZmFmMzRkZTA3N2NjZTk1ZjA1ZmFmNzNkMGFlMWQZVIYd: 00:20:36.530 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.530 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.531 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.531 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.531 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.094 00:20:37.094 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.094 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.094 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.352 { 00:20:37.352 "cntlid": 143, 00:20:37.352 "qid": 0, 00:20:37.352 "state": "enabled", 00:20:37.352 "thread": "nvmf_tgt_poll_group_000", 00:20:37.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:37.352 "listen_address": { 00:20:37.352 "trtype": "TCP", 00:20:37.352 "adrfam": 
"IPv4", 00:20:37.352 "traddr": "10.0.0.2", 00:20:37.352 "trsvcid": "4420" 00:20:37.352 }, 00:20:37.352 "peer_address": { 00:20:37.352 "trtype": "TCP", 00:20:37.352 "adrfam": "IPv4", 00:20:37.352 "traddr": "10.0.0.1", 00:20:37.352 "trsvcid": "58296" 00:20:37.352 }, 00:20:37.352 "auth": { 00:20:37.352 "state": "completed", 00:20:37.352 "digest": "sha512", 00:20:37.352 "dhgroup": "ffdhe8192" 00:20:37.352 } 00:20:37.352 } 00:20:37.352 ]' 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.352 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.352 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.352 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.352 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.352 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.352 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.611 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:37.611 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.188 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.444 10:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.444 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.445 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.445 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.445 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.007 00:20:39.007 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.007 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.007 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.264 { 00:20:39.264 "cntlid": 145, 00:20:39.264 "qid": 0, 00:20:39.264 "state": "enabled", 00:20:39.264 "thread": "nvmf_tgt_poll_group_000", 00:20:39.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:39.264 "listen_address": { 00:20:39.264 "trtype": "TCP", 00:20:39.264 "adrfam": "IPv4", 00:20:39.264 "traddr": "10.0.0.2", 00:20:39.264 "trsvcid": "4420" 00:20:39.264 }, 00:20:39.264 "peer_address": { 00:20:39.264 "trtype": "TCP", 00:20:39.264 "adrfam": "IPv4", 00:20:39.264 "traddr": "10.0.0.1", 00:20:39.264 "trsvcid": "58316" 00:20:39.264 }, 00:20:39.264 "auth": { 00:20:39.264 "state": 
"completed", 00:20:39.264 "digest": "sha512", 00:20:39.264 "dhgroup": "ffdhe8192" 00:20:39.264 } 00:20:39.264 } 00:20:39.264 ]' 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.264 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.521 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:39.521 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjE4MjVjMGVmN2RiNDE1MGNjZWIzYTJmNWNhM2QyNGMyNTA3MGMwNzQ3NTEwYTk5sRzp8A==: --dhchap-ctrl-secret 
DHHC-1:03:MzllOWVmMTE2YjgxZGU1ZmYyMjM4OTUxMmU0NDVmMGFhMjUzM2M3OGNlNDJiYmNkZWI0NWQxNjU4OWU0ZWI4NNuLKkg=: 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:40.098 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:40.354 request: 00:20:40.354 { 00:20:40.354 "name": "nvme0", 00:20:40.354 "trtype": "tcp", 00:20:40.354 "traddr": "10.0.0.2", 00:20:40.354 "adrfam": "ipv4", 00:20:40.354 "trsvcid": "4420", 00:20:40.354 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:40.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:40.354 "prchk_reftag": false, 00:20:40.354 "prchk_guard": false, 00:20:40.354 "hdgst": false, 00:20:40.354 "ddgst": false, 00:20:40.354 "dhchap_key": "key2", 00:20:40.354 "allow_unrecognized_csi": false, 00:20:40.354 "method": "bdev_nvme_attach_controller", 00:20:40.354 "req_id": 1 00:20:40.354 } 00:20:40.354 Got JSON-RPC error response 00:20:40.354 response: 00:20:40.354 { 00:20:40.354 "code": -5, 00:20:40.354 "message": 
"Input/output error" 00:20:40.354 } 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:40.611 10:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:40.611 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:40.869 request: 00:20:40.869 { 00:20:40.869 "name": "nvme0", 00:20:40.869 "trtype": "tcp", 00:20:40.869 "traddr": "10.0.0.2", 00:20:40.869 "adrfam": "ipv4", 00:20:40.869 "trsvcid": "4420", 00:20:40.869 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:40.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:40.869 "prchk_reftag": false, 00:20:40.869 "prchk_guard": false, 00:20:40.869 "hdgst": 
false, 00:20:40.869 "ddgst": false, 00:20:40.869 "dhchap_key": "key1", 00:20:40.869 "dhchap_ctrlr_key": "ckey2", 00:20:40.869 "allow_unrecognized_csi": false, 00:20:40.869 "method": "bdev_nvme_attach_controller", 00:20:40.869 "req_id": 1 00:20:40.869 } 00:20:40.869 Got JSON-RPC error response 00:20:40.869 response: 00:20:40.869 { 00:20:40.869 "code": -5, 00:20:40.869 "message": "Input/output error" 00:20:40.869 } 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.869 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.434 request: 00:20:41.434 { 00:20:41.434 "name": "nvme0", 00:20:41.434 "trtype": 
"tcp", 00:20:41.434 "traddr": "10.0.0.2", 00:20:41.434 "adrfam": "ipv4", 00:20:41.434 "trsvcid": "4420", 00:20:41.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:41.434 "prchk_reftag": false, 00:20:41.434 "prchk_guard": false, 00:20:41.434 "hdgst": false, 00:20:41.434 "ddgst": false, 00:20:41.434 "dhchap_key": "key1", 00:20:41.434 "dhchap_ctrlr_key": "ckey1", 00:20:41.434 "allow_unrecognized_csi": false, 00:20:41.434 "method": "bdev_nvme_attach_controller", 00:20:41.434 "req_id": 1 00:20:41.434 } 00:20:41.434 Got JSON-RPC error response 00:20:41.434 response: 00:20:41.434 { 00:20:41.434 "code": -5, 00:20:41.434 "message": "Input/output error" 00:20:41.434 } 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2640764 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2640764 ']' 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2640764 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2640764 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2640764' 00:20:41.434 killing process with pid 2640764 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2640764 00:20:41.434 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2640764 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2662743 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2662743 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2662743 ']' 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.692 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2662743 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2662743 ']' 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.627 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 null0 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sIJ 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.s0J ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s0J 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oga 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.doy ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.doy 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.64I 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.jWX ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jWX 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8bD 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.886 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.819 nvme0n1 00:20:43.819 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.819 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.819 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.819 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.819 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.819 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.076 { 00:20:44.076 "cntlid": 1, 00:20:44.076 "qid": 0, 00:20:44.076 "state": "enabled", 00:20:44.076 "thread": "nvmf_tgt_poll_group_000", 00:20:44.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:44.076 "listen_address": { 00:20:44.076 "trtype": "TCP", 00:20:44.076 "adrfam": "IPv4", 00:20:44.076 "traddr": "10.0.0.2", 00:20:44.076 "trsvcid": "4420" 00:20:44.076 }, 00:20:44.076 "peer_address": { 00:20:44.076 "trtype": "TCP", 00:20:44.076 "adrfam": "IPv4", 00:20:44.076 "traddr": 
"10.0.0.1", 00:20:44.076 "trsvcid": "56142" 00:20:44.076 }, 00:20:44.076 "auth": { 00:20:44.076 "state": "completed", 00:20:44.076 "digest": "sha512", 00:20:44.076 "dhgroup": "ffdhe8192" 00:20:44.076 } 00:20:44.076 } 00:20:44.076 ]' 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.076 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.332 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:44.332 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:44.895 10:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:44.895 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:45.152 10:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.152 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.152 request: 00:20:45.152 { 00:20:45.152 "name": "nvme0", 00:20:45.152 "trtype": "tcp", 00:20:45.152 "traddr": "10.0.0.2", 00:20:45.152 "adrfam": "ipv4", 00:20:45.152 "trsvcid": "4420", 00:20:45.152 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:45.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:45.152 "prchk_reftag": false, 00:20:45.152 "prchk_guard": false, 00:20:45.152 "hdgst": false, 00:20:45.152 "ddgst": false, 00:20:45.152 "dhchap_key": "key3", 00:20:45.152 
"allow_unrecognized_csi": false, 00:20:45.152 "method": "bdev_nvme_attach_controller", 00:20:45.152 "req_id": 1 00:20:45.152 } 00:20:45.152 Got JSON-RPC error response 00:20:45.152 response: 00:20:45.152 { 00:20:45.152 "code": -5, 00:20:45.152 "message": "Input/output error" 00:20:45.152 } 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:45.437 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:45.437 10:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.437 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.695 request: 00:20:45.695 { 00:20:45.695 "name": "nvme0", 00:20:45.695 "trtype": "tcp", 00:20:45.695 "traddr": "10.0.0.2", 00:20:45.695 "adrfam": "ipv4", 00:20:45.695 "trsvcid": "4420", 00:20:45.695 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:45.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:45.695 "prchk_reftag": false, 00:20:45.695 "prchk_guard": false, 00:20:45.695 "hdgst": false, 00:20:45.695 "ddgst": false, 00:20:45.695 "dhchap_key": "key3", 00:20:45.695 "allow_unrecognized_csi": false, 00:20:45.695 "method": "bdev_nvme_attach_controller", 00:20:45.695 "req_id": 1 00:20:45.695 } 00:20:45.695 Got JSON-RPC error response 00:20:45.695 response: 00:20:45.695 { 00:20:45.695 "code": -5, 00:20:45.695 "message": "Input/output error" 00:20:45.695 } 00:20:45.695 
10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:45.695 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:45.952 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:46.209 request: 00:20:46.209 { 00:20:46.209 "name": "nvme0", 00:20:46.209 "trtype": "tcp", 00:20:46.209 "traddr": "10.0.0.2", 00:20:46.209 "adrfam": "ipv4", 00:20:46.209 "trsvcid": "4420", 00:20:46.209 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:46.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:46.209 "prchk_reftag": false, 00:20:46.209 "prchk_guard": false, 00:20:46.209 "hdgst": false, 00:20:46.209 "ddgst": false, 00:20:46.209 "dhchap_key": "key0", 00:20:46.209 "dhchap_ctrlr_key": "key1", 00:20:46.209 "allow_unrecognized_csi": false, 00:20:46.209 "method": "bdev_nvme_attach_controller", 00:20:46.209 "req_id": 1 00:20:46.209 } 00:20:46.209 Got JSON-RPC error response 00:20:46.209 response: 00:20:46.209 { 00:20:46.209 "code": -5, 00:20:46.209 "message": "Input/output error" 00:20:46.209 } 00:20:46.209 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:46.209 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.209 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.209 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.209 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:46.209 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:46.209 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:46.466 nvme0n1 00:20:46.466 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:46.466 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:46.466 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.722 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.723 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.723 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.979 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:46.979 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.979 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
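The `NOT bdev_connect …` steps in this log assert the negative path: attaching with stale DH-HMAC-CHAP keys must fail, which is why the JSON-RPC `-5` / `Input/output error` response above counts as a pass. A minimal sketch of that assertion helper follows; it is simplified from the fuller `valid_exec_arg` / `es` bookkeeping visible in the `autotest_common.sh` trace lines (the simplification is mine):

```shell
# Simplified sketch of the NOT assertion pattern used by autotest_common.sh:
# run a command and succeed only when the command itself fails.
# The real helper also validates the argument via valid_exec_arg and tracks
# the exit status in an "es" variable; both are omitted here.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> assertion fails
    fi
    return 0       # command failed as expected -> assertion passes
}

NOT false && echo "negative-path assertion passed"
```

In the log, `NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1` passes precisely because the target rejects the mismatched controller key.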
00:20:46.979 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.979 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:46.979 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:46.979 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:47.558 nvme0n1 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.821 
10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.821 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:48.078 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.078 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:48.078 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: --dhchap-ctrl-secret DHHC-1:03:MjFjYjQ4ODMyYWQ2MDYxZDk1NTk2NWM0OWIxOGIzYjg4ZGU4YzI5Zjk4NjYwZGE0M2NmMmUyZTlmZjlkZmJjNnenmK0=: 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.661 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:48.919 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:49.176 request: 00:20:49.176 { 00:20:49.176 "name": "nvme0", 00:20:49.176 "trtype": "tcp", 00:20:49.176 "traddr": "10.0.0.2", 00:20:49.176 "adrfam": "ipv4", 00:20:49.176 "trsvcid": "4420", 00:20:49.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:49.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:49.176 "prchk_reftag": false, 00:20:49.176 "prchk_guard": false, 00:20:49.176 "hdgst": false, 00:20:49.176 "ddgst": false, 00:20:49.176 "dhchap_key": "key1", 00:20:49.176 "allow_unrecognized_csi": false, 00:20:49.176 "method": "bdev_nvme_attach_controller", 00:20:49.176 "req_id": 1 00:20:49.176 } 00:20:49.176 Got JSON-RPC error response 00:20:49.176 response: 00:20:49.176 { 00:20:49.176 "code": -5, 00:20:49.176 "message": "Input/output error" 00:20:49.176 } 00:20:49.176 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:49.176 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:49.176 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:49.176 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:49.176 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:49.176 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:49.176 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:50.108 nvme0n1 00:20:50.108 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:50.108 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:50.108 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.366 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.366 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.366 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.366 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:50.366 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.366 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.366 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.366 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:50.366 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:50.366 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:50.623 nvme0n1 00:20:50.623 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:50.623 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:50.623 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.881 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.881 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.881 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: '' 2s 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: ]] 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGYyNWEyMjU3MDk5NWE4NzU4ZTJlY2M0YmQ0YTI3ZjI/bZq1: 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:51.141 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:53.147 
10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: 2s 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:53.147 10:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: ]] 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Yjk3OWYzZjdiMDA1YTQzNThmOTZhZDVlZGJiOWI4YTFlYzQ4MTFhNGRiM2M4ZjNmBJZGrQ==: 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:53.147 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:55.672 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:55.672 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:55.672 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:55.672 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:55.673 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:55.931 nvme0n1 00:20:55.931 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:20:55.931 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.931 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.931 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.931 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:55.931 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:56.496 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:56.496 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.496 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:56.755 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.755 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:56.755 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.755 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.755 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.755 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:56.755 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:57.020 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:57.584 request: 00:20:57.584 { 00:20:57.584 "name": "nvme0", 00:20:57.584 "dhchap_key": "key1", 00:20:57.584 "dhchap_ctrlr_key": "key3", 00:20:57.584 "method": "bdev_nvme_set_keys", 00:20:57.584 "req_id": 1 00:20:57.584 } 00:20:57.584 Got JSON-RPC error response 00:20:57.584 response: 00:20:57.584 { 00:20:57.584 "code": -13, 00:20:57.584 "message": "Permission denied" 00:20:57.584 } 00:20:57.584 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:57.584 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:57.584 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:57.584 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:57.584 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:57.584 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:57.584 10:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.839 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:57.839 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:58.791 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:58.791 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:58.791 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:59.049 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:59.615 nvme0n1 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.873 10:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:59.873 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:00.131 request: 00:21:00.131 { 00:21:00.131 "name": "nvme0", 00:21:00.131 "dhchap_key": "key2", 00:21:00.131 "dhchap_ctrlr_key": "key0", 00:21:00.131 "method": "bdev_nvme_set_keys", 00:21:00.131 "req_id": 1 00:21:00.131 } 00:21:00.131 Got JSON-RPC error response 00:21:00.131 response: 00:21:00.131 { 00:21:00.131 "code": -13, 00:21:00.131 "message": "Permission denied" 00:21:00.131 } 00:21:00.131 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:00.131 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.131 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:00.131 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.131 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:00.131 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:00.131 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.389 10:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:00.389 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:01.321 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:01.321 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:01.321 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2640865 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2640865 ']' 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2640865 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2640865 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2640865' 00:21:01.578 killing process with pid 2640865 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2640865 00:21:01.578 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2640865 00:21:01.836 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:01.836 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.836 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:01.836 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.836 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:01.836 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.836 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.836 rmmod nvme_tcp 00:21:02.095 rmmod nvme_fabrics 00:21:02.095 rmmod nvme_keyring 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2662743 ']' 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2662743 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2662743 ']' 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2662743 
00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662743 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662743' 00:21:02.095 killing process with pid 2662743 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2662743 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2662743 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.095 10:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.095 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.sIJ /tmp/spdk.key-sha256.oga /tmp/spdk.key-sha384.64I /tmp/spdk.key-sha512.8bD /tmp/spdk.key-sha512.s0J /tmp/spdk.key-sha384.doy /tmp/spdk.key-sha256.jWX '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:04.651 00:21:04.651 real 2m32.092s 00:21:04.651 user 5m50.150s 00:21:04.651 sys 0m24.399s 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.651 ************************************ 00:21:04.651 END TEST nvmf_auth_target 00:21:04.651 ************************************ 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.651 ************************************ 00:21:04.651 START TEST nvmf_bdevio_no_huge 00:21:04.651 ************************************ 00:21:04.651 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:04.651 * Looking for test storage... 00:21:04.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.651 --rc genhtml_branch_coverage=1 00:21:04.651 --rc genhtml_function_coverage=1 00:21:04.651 --rc genhtml_legend=1 00:21:04.651 --rc geninfo_all_blocks=1 00:21:04.651 --rc geninfo_unexecuted_blocks=1 00:21:04.651 00:21:04.651 ' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.651 --rc genhtml_branch_coverage=1 00:21:04.651 --rc genhtml_function_coverage=1 00:21:04.651 --rc genhtml_legend=1 00:21:04.651 --rc geninfo_all_blocks=1 00:21:04.651 --rc geninfo_unexecuted_blocks=1 00:21:04.651 00:21:04.651 ' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.651 --rc genhtml_branch_coverage=1 00:21:04.651 --rc genhtml_function_coverage=1 00:21:04.651 --rc genhtml_legend=1 00:21:04.651 --rc geninfo_all_blocks=1 00:21:04.651 --rc geninfo_unexecuted_blocks=1 00:21:04.651 00:21:04.651 ' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.651 --rc genhtml_branch_coverage=1 
00:21:04.651 --rc genhtml_function_coverage=1 00:21:04.651 --rc genhtml_legend=1 00:21:04.651 --rc geninfo_all_blocks=1 00:21:04.651 --rc geninfo_unexecuted_blocks=1 00:21:04.651 00:21:04.651 ' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:04.651 10:31:42 
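A few lines above, the trace steps through `scripts/common.sh`'s `lt 1.15 2` / `cmp_versions` helper: both version strings are split on `.`, `-`, and `:` (`IFS=.-:`), then compared numerically component by component (`ver1_l=2` vs `ver2_l=1`, with missing components treated as 0 in shell arithmetic). A hedged Python re-implementation of that comparison — function and variable names are mine; the splitting and loop mirror the traced shell logic:

```python
import re

def ver_lt(a: str, b: str) -> bool:
    """Component-wise 'less than' over dotted versions, mirroring the
    IFS=.-: splitting and per-index numeric compare seen in the trace."""
    va = [int(x) for x in re.split(r"[.\-:]", a) if x.isdigit()]
    vb = [int(x) for x in re.split(r"[.\-:]", b) if x.isdigit()]
    # Pad the shorter list with zeros: unset shell array elements evaluate
    # to 0 in (( ... )), which is how the trace compares 1.15 against 2.
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))
    vb += [0] * (n - len(vb))
    for x, y in zip(va, vb):
        if x != y:
            return x < y
    return False

print(ver_lt("1.15", "2"))  # True, matching the trace's 'lt 1.15 2' returning 0
```

Here 1.15 < 2 holds on the first component (1 < 2), which is why the test selects the newer lcov option set.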
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.651 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.217 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.217 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.217 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.217 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:21:11.218 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.218 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.218 
10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.218 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
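The device scan above buckets NICs by PCI vendor/device ID (`intel=0x8086`, `mellanox=0x15b3`): both ports report `0x8086 - 0x159b`, which the script's `e810` array claims (driver `ice`), yielding the two `cvl_0_*` net devices. An illustrative lookup built from the IDs visible in this trace — only 0x159b is actually matched in this run; the grouping of the other IDs simply follows the `e810`/`x722`/`mlx` array assignments logged above:

```python
# Vendor and device IDs grouped the way nvmf/common.sh's arrays do in the trace.
INTEL, MELLANOX = 0x8086, 0x15B3

E810 = {0x1592, 0x159B}
X722 = {0x37D2}
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B, 0x1017, 0x1019, 0x1015, 0x1013}

def classify(vendor: int, device: int) -> str:
    """Mirror the trace's bucketing: which local array would claim this NIC?"""
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return "unknown"

# Both ports found in the log: 0000:86:00.0 / 0000:86:00.1 at (0x8086 - 0x159b)
print(classify(0x8086, 0x159B))  # e810
```

Because the e810 bucket is non-empty, `pci_devs` is narrowed to those two ports before the per-device net_dev discovery that follows.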
00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.218 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:11.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:21:11.218 00:21:11.218 --- 10.0.0.2 ping statistics --- 00:21:11.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.218 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:21:11.218 00:21:11.218 --- 10.0.0.1 ping statistics --- 00:21:11.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.218 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2669674 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2669674 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2669674 ']' 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.218 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.218 [2024-12-09 10:31:48.186676] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:21:11.218 [2024-12-09 10:31:48.186721] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:11.218 [2024-12-09 10:31:48.272802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.218 [2024-12-09 10:31:48.318925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.218 [2024-12-09 10:31:48.318960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.218 [2024-12-09 10:31:48.318967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.218 [2024-12-09 10:31:48.318973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.218 [2024-12-09 10:31:48.318978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
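The `nvmf_tcp_init` sequence traced above splits the two physical ports into a target/initiator pair: one port moves into a private network namespace and takes the target IP, the other stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP traffic on port 4420. A dry-run sketch that only prints the commands, so it runs unprivileged (interface names, IPs, and the namespace name are the ones in this log; running the printed commands as root would reproduce the topology):

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh: print, rather than
# execute, the namespace plumbing, so no root or real NICs are needed.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

emit_setup_cmds() {
  cat <<EOF
ip -4 addr flush $TGT_IF
ip -4 addr flush $INI_IF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
EOF
}

emit_setup_cmds
```

After this, the target app is launched under `ip netns exec $NS`, which is why the ping in both directions is the readiness check.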
00:21:11.218 [2024-12-09 10:31:48.320078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:11.218 [2024-12-09 10:31:48.320187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:11.218 [2024-12-09 10:31:48.320202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:11.218 [2024-12-09 10:31:48.320207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.476 [2024-12-09 10:31:49.086089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:11.476 10:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.476 Malloc0 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.476 [2024-12-09 10:31:49.130354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.476 10:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.476 { 00:21:11.476 "params": { 00:21:11.476 "name": "Nvme$subsystem", 00:21:11.476 "trtype": "$TEST_TRANSPORT", 00:21:11.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.476 "adrfam": "ipv4", 00:21:11.476 "trsvcid": "$NVMF_PORT", 00:21:11.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.476 "hdgst": ${hdgst:-false}, 00:21:11.476 "ddgst": ${ddgst:-false} 00:21:11.476 }, 00:21:11.476 "method": "bdev_nvme_attach_controller" 00:21:11.476 } 00:21:11.476 EOF 00:21:11.476 )") 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:11.476 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:11.476 "params": { 00:21:11.476 "name": "Nvme1", 00:21:11.476 "trtype": "tcp", 00:21:11.476 "traddr": "10.0.0.2", 00:21:11.476 "adrfam": "ipv4", 00:21:11.476 "trsvcid": "4420", 00:21:11.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.476 "hdgst": false, 00:21:11.476 "ddgst": false 00:21:11.476 }, 00:21:11.476 "method": "bdev_nvme_attach_controller" 00:21:11.476 }' 00:21:11.476 [2024-12-09 10:31:49.181146] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:11.476 [2024-12-09 10:31:49.181191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2669921 ] 00:21:11.734 [2024-12-09 10:31:49.261045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:11.734 [2024-12-09 10:31:49.308929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.734 [2024-12-09 10:31:49.309036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.734 [2024-12-09 10:31:49.309037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.991 I/O targets: 00:21:11.991 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:11.991 00:21:11.991 00:21:11.991 CUnit - A unit testing framework for C - Version 2.1-3 00:21:11.991 http://cunit.sourceforge.net/ 00:21:11.991 00:21:11.991 00:21:11.991 Suite: bdevio tests on: Nvme1n1 00:21:11.991 Test: blockdev write read block ...passed 00:21:11.991 Test: blockdev write zeroes read block ...passed 00:21:11.991 Test: blockdev write zeroes read no split ...passed 00:21:11.991 Test: blockdev write zeroes 
read split ...passed 00:21:12.248 Test: blockdev write zeroes read split partial ...passed 00:21:12.248 Test: blockdev reset ...[2024-12-09 10:31:49.721096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:12.248 [2024-12-09 10:31:49.721171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5510 (9): Bad file descriptor 00:21:12.248 [2024-12-09 10:31:49.818686] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:21:12.248 passed 00:21:12.248 Test: blockdev write read 8 blocks ...passed 00:21:12.248 Test: blockdev write read size > 128k ...passed 00:21:12.248 Test: blockdev write read invalid size ...passed 00:21:12.248 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:12.248 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:12.248 Test: blockdev write read max offset ...passed 00:21:12.248 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:12.504 Test: blockdev writev readv 8 blocks ...passed 00:21:12.504 Test: blockdev writev readv 30 x 1block ...passed 00:21:12.504 Test: blockdev writev readv block ...passed 00:21:12.504 Test: blockdev writev readv size > 128k ...passed 00:21:12.504 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:12.504 Test: blockdev comparev and writev ...[2024-12-09 10:31:50.029858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 10:31:50.029888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.029903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 
10:31:50.029931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.030198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 10:31:50.030211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.030225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 10:31:50.030234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.030499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 10:31:50.030511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.030524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 10:31:50.030533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.030803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 10:31:50.030823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.030837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.504 [2024-12-09 10:31:50.030845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:12.504 passed 00:21:12.504 Test: blockdev nvme passthru rw ...passed 00:21:12.504 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:31:50.113113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.504 [2024-12-09 10:31:50.113131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.113237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.504 [2024-12-09 10:31:50.113246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.113350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.504 [2024-12-09 10:31:50.113359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:12.504 [2024-12-09 10:31:50.113459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.504 [2024-12-09 10:31:50.113468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:12.504 passed 00:21:12.504 Test: blockdev nvme admin passthru ...passed 00:21:12.504 Test: blockdev copy ...passed 00:21:12.504 00:21:12.504 Run Summary: Type Total Ran Passed Failed Inactive 00:21:12.504 suites 1 1 n/a 0 0 00:21:12.504 tests 23 23 23 0 0 00:21:12.504 asserts 152 152 152 0 n/a 00:21:12.504 00:21:12.504 Elapsed time = 1.155 seconds 
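The bdevio run above receives its controller configuration on `/dev/fd/62` from `gen_nvmf_target_json`, which expands one JSON stanza per subsystem from a heredoc template. A simplified single-subsystem sketch of that pattern (the NQNs, address, and port mirror the values printed in this log; the real helper loops over its arguments and joins the stanzas with `jq`):

```shell
#!/usr/bin/env bash
# Simplified sketch of gen_nvmf_target_json from nvmf/common.sh:
# expand a heredoc template into the bdev_nvme_attach_controller config
# that bdevio reads from /dev/fd/62.
gen_target_json() {
  local subsystem=${1:-1}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 1
```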
00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.761 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.761 rmmod nvme_tcp 00:21:13.018 rmmod nvme_fabrics 00:21:13.018 rmmod nvme_keyring 00:21:13.018 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.018 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:21:13.018 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:13.018 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2669674 ']' 00:21:13.018 10:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2669674 00:21:13.018 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2669674 ']' 00:21:13.018 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2669674 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669674 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669674' 00:21:13.019 killing process with pid 2669674 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2669674 00:21:13.019 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2669674 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:13.277 10:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.277 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.810 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.810 00:21:15.810 real 0m11.004s 00:21:15.810 user 0m14.295s 00:21:15.810 sys 0m5.418s 00:21:15.810 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.810 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:15.810 ************************************ 00:21:15.810 END TEST nvmf_bdevio_no_huge 00:21:15.810 ************************************ 00:21:15.810 10:31:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:15.810 10:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:15.810 10:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.810 10:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:15.810 
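Teardown above restores the firewall with the `iptr` helper, which pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, so only the rules the test tagged with an `SPDK_NVMF` comment are dropped. The filter step can be exercised without root; the sample ruleset below is illustrative, with the port-4420 rule taken from this log:

```shell
#!/usr/bin/env bash
# Sketch of the filtering half of iptr from nvmf/common.sh; the real
# helper runs: iptables-save | grep -v SPDK_NVMF | iptables-restore.
strip_spdk_rules() { grep -v SPDK_NVMF; }

# Illustrative ruleset: one SPDK-tagged rule, one unrelated rule.
sample_rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -i eth0 -j ACCEPT'

printf '%s\n' "$sample_rules" | strip_spdk_rules
```

Only the untagged rule survives the filter, which is exactly what `iptables-restore` then reinstates.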
************************************ 00:21:15.810 START TEST nvmf_tls 00:21:15.810 ************************************ 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:15.810 * Looking for test storage... 00:21:15.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:21:15.810 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:15.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.811 --rc genhtml_branch_coverage=1 00:21:15.811 --rc genhtml_function_coverage=1 00:21:15.811 --rc genhtml_legend=1 00:21:15.811 --rc geninfo_all_blocks=1 00:21:15.811 --rc geninfo_unexecuted_blocks=1 00:21:15.811 00:21:15.811 ' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:15.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.811 --rc genhtml_branch_coverage=1 00:21:15.811 --rc genhtml_function_coverage=1 00:21:15.811 --rc genhtml_legend=1 00:21:15.811 --rc geninfo_all_blocks=1 00:21:15.811 --rc geninfo_unexecuted_blocks=1 00:21:15.811 00:21:15.811 ' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:15.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.811 --rc genhtml_branch_coverage=1 00:21:15.811 --rc genhtml_function_coverage=1 00:21:15.811 --rc genhtml_legend=1 00:21:15.811 --rc geninfo_all_blocks=1 00:21:15.811 --rc geninfo_unexecuted_blocks=1 00:21:15.811 00:21:15.811 ' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:15.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.811 --rc genhtml_branch_coverage=1 00:21:15.811 --rc genhtml_function_coverage=1 00:21:15.811 --rc genhtml_legend=1 00:21:15.811 --rc geninfo_all_blocks=1 00:21:15.811 --rc geninfo_unexecuted_blocks=1 00:21:15.811 00:21:15.811 ' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.811 
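The `lt 1.15 2` trace above comes from `cmp_versions` in scripts/common.sh, which splits each version string on dots and compares the components numerically, treating a missing component as zero. A condensed sketch of that comparison (numeric-only components, which is all this trace exercises; the real helper also handles `-` and `:` separators):

```shell
#!/usr/bin/env bash
# Condensed sketch of the cmp_versions/lt logic from scripts/common.sh:
# compare dotted versions component-by-component, numerically; success
# (exit 0) means "$1 < $2".
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}   # pad the shorter version with zeros
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1   # equal, therefore not less-than
}

if version_lt 1.15 2; then echo "1.15 < 2"; fi
```

Comparing numerically rather than lexically is the point: `1.9 < 1.15` holds here, which is why the lcov 1.x check in the trace works.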
10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.811 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.812 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.812 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:15.812 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:15.812 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:21.083 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.342 10:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.342 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.342 10:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:21.342 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.342 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.342 10:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.342 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.343 
10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.343 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.343 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.343 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.343 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.343 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:21:21.343 00:21:21.343 --- 10.0.0.2 ping statistics --- 00:21:21.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.343 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:21:21.343 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:21.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:21.601 00:21:21.601 --- 10.0.0.1 ping statistics --- 00:21:21.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.601 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2673673 00:21:21.601 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2673673 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2673673 ']' 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.602 [2024-12-09 10:31:59.160751] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:21.602 [2024-12-09 10:31:59.160792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.602 [2024-12-09 10:31:59.240399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.602 [2024-12-09 10:31:59.280909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.602 [2024-12-09 10:31:59.280943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:21.602 [2024-12-09 10:31:59.280950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.602 [2024-12-09 10:31:59.280956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.602 [2024-12-09 10:31:59.280961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.602 [2024-12-09 10:31:59.281487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.602 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.860 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.860 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:21.860 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:21.860 true 00:21:21.860 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:21.860 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:22.118 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:22.118 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:22.118 
10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:22.377 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:22.377 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:22.636 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:22.636 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:22.636 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:22.636 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:22.636 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:22.895 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:22.895 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:22.895 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:22.895 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:23.173 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:23.173 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:23.173 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:21:23.173 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:23.173 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:23.432 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:23.432 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:23.432 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:23.690 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:23.691 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:23.950 10:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.C73KmllEIT 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.21yExxzu4j 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.C73KmllEIT 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.21yExxzu4j 00:21:23.950 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:24.208 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:24.466 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.C73KmllEIT 00:21:24.466 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.C73KmllEIT 00:21:24.466 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:24.466 [2024-12-09 10:32:02.161485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.466 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.725 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.983 [2024-12-09 10:32:02.518379] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.983 [2024-12-09 10:32:02.518589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.983 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:25.242 malloc0 00:21:25.242 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:25.242 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.C73KmllEIT 00:21:25.500 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:25.759 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.C73KmllEIT 00:21:35.733 Initializing NVMe Controllers 00:21:35.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:35.733 Initialization complete. Launching workers. 
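The `format_interchange_psk` / `format_key` steps near the top of this run (target/tls.sh@119-120) turn raw key material into the `NVMeTLSkey-1:01:...:` interchange strings used throughout the tests. A minimal Python sketch of that construction, inferred from the log output rather than from SPDK's actual `nvmf/common.sh` helper: the assumption is that the payload is the base64 of the key string with its CRC-32 appended little-endian when the digest flag is 1.

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, with_crc: bool = True) -> str:
    """Build an NVMe TLS PSK interchange string shaped like the keys in
    this log. Assumption: the key material is the literal ASCII string and
    its CRC-32 is appended little-endian before base64 encoding when the
    digest flag (the trailing '1' argument in the log) is set."""
    data = key.encode("ascii")
    if with_crc:
        data += struct.pack("<I", zlib.crc32(data))
    return "NVMeTLSkey-1:01:" + base64.b64encode(data).decode("ascii") + ":"

psk = format_interchange_psk("ffeeddccbbaa99887766554433221100")
# The payload decodes back to the 32-character key plus a 4-byte CRC-32.
decoded = base64.b64decode(psk.split(":")[2])
assert decoded[:-4] == b"ffeeddccbbaa99887766554433221100"
assert struct.unpack("<I", decoded[-4:])[0] == zlib.crc32(decoded[:-4])
```

This reproduces the general shape of the logged keys (prefix, 48-character base64 payload, trailing colon), not necessarily SPDK's byte-exact output.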
00:21:35.733 ======================================================== 00:21:35.733 Latency(us) 00:21:35.733 Device Information : IOPS MiB/s Average min max 00:21:35.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16861.06 65.86 3795.82 852.93 4792.34 00:21:35.733 ======================================================== 00:21:35.733 Total : 16861.06 65.86 3795.82 852.93 4792.34 00:21:35.733 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C73KmllEIT 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C73KmllEIT 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2676027 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2676027 /var/tmp/bdevperf.sock 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2676027 ']' 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.733 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.733 [2024-12-09 10:32:13.409549] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:35.733 [2024-12-09 10:32:13.409593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676027 ] 00:21:35.996 [2024-12-09 10:32:13.484732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.996 [2024-12-09 10:32:13.524536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.996 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.996 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.996 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C73KmllEIT 00:21:36.254 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:36.512 [2024-12-09 10:32:13.980097] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.512 TLSTESTn1 00:21:36.512 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:36.512 Running I/O for 10 seconds... 00:21:38.823 5399.00 IOPS, 21.09 MiB/s [2024-12-09T09:32:17.479Z] 5537.00 IOPS, 21.63 MiB/s [2024-12-09T09:32:18.414Z] 5557.00 IOPS, 21.71 MiB/s [2024-12-09T09:32:19.351Z] 5577.00 IOPS, 21.79 MiB/s [2024-12-09T09:32:20.289Z] 5591.40 IOPS, 21.84 MiB/s [2024-12-09T09:32:21.226Z] 5592.50 IOPS, 21.85 MiB/s [2024-12-09T09:32:22.605Z] 5590.29 IOPS, 21.84 MiB/s [2024-12-09T09:32:23.538Z] 5594.88 IOPS, 21.85 MiB/s [2024-12-09T09:32:24.479Z] 5598.56 IOPS, 21.87 MiB/s [2024-12-09T09:32:24.479Z] 5603.50 IOPS, 21.89 MiB/s 00:21:46.755 Latency(us) 00:21:46.755 [2024-12-09T09:32:24.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.755 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:46.755 Verification LBA range: start 0x0 length 0x2000 00:21:46.755 TLSTESTn1 : 10.01 5608.84 21.91 0.00 0.00 22787.62 5180.46 27088.21 00:21:46.755 [2024-12-09T09:32:24.479Z] =================================================================================================================== 00:21:46.755 [2024-12-09T09:32:24.479Z] Total : 5608.84 21.91 0.00 0.00 22787.62 5180.46 27088.21 00:21:46.755 { 00:21:46.755 "results": [ 00:21:46.755 { 00:21:46.755 "job": "TLSTESTn1", 00:21:46.755 "core_mask": "0x4", 00:21:46.755 "workload": "verify", 00:21:46.755 "status": "finished", 00:21:46.755 "verify_range": { 00:21:46.755 "start": 0, 00:21:46.755 "length": 8192 00:21:46.755 }, 00:21:46.755 "queue_depth": 128, 00:21:46.755 "io_size": 4096, 00:21:46.755 "runtime": 10.013122, 00:21:46.755 "iops": 
5608.840080046963, 00:21:46.755 "mibps": 21.909531562683448, 00:21:46.755 "io_failed": 0, 00:21:46.755 "io_timeout": 0, 00:21:46.755 "avg_latency_us": 22787.62011256552, 00:21:46.755 "min_latency_us": 5180.464761904762, 00:21:46.755 "max_latency_us": 27088.213333333333 00:21:46.755 } 00:21:46.755 ], 00:21:46.755 "core_count": 1 00:21:46.755 } 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2676027 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2676027 ']' 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2676027 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676027 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676027' 00:21:46.755 killing process with pid 2676027 00:21:46.755 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2676027 00:21:46.756 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.756 00:21:46.756 Latency(us) 00:21:46.756 [2024-12-09T09:32:24.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.756 [2024-12-09T09:32:24.480Z] 
=================================================================================================================== 00:21:46.756 [2024-12-09T09:32:24.480Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2676027 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.21yExxzu4j 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.21yExxzu4j 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.21yExxzu4j 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.21yExxzu4j 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2677762 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2677762 /var/tmp/bdevperf.sock 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2677762 ']' 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.756 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.015 [2024-12-09 10:32:24.490147] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
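The `bdev_nvme_attach_controller` calls issued through `rpc.py -s /var/tmp/bdevperf.sock` correspond to JSON-RPC frames like the `request:` dumps printed by the failing attach attempts in this log. A sketch that rebuilds such a frame from exactly the parameters shown in those dumps; the `jsonrpc`/`id` envelope fields are an assumption about rpc.py's JSON-RPC 2.0 framing over the UNIX socket.

```python
import json

# Parameters copied verbatim from a "request:" dump in this log; a -5 code
# in the response maps to the "Input/output error" both mismatched-PSK
# attempts report.
params = {
    "name": "TLSTEST",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "prchk_reftag": False,
    "prchk_guard": False,
    "hdgst": False,
    "ddgst": False,
    "psk": "key0",
    "allow_unrecognized_csi": False,
}
request = {"jsonrpc": "2.0", "id": 1,
           "method": "bdev_nvme_attach_controller", "params": params}
frame = json.dumps(request)
assert json.loads(frame)["params"]["psk"] == "key0"
```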
00:21:47.015 [2024-12-09 10:32:24.490200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677762 ] 00:21:47.015 [2024-12-09 10:32:24.567295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.015 [2024-12-09 10:32:24.607042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.015 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.015 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:47.015 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.21yExxzu4j 00:21:47.273 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:47.559 [2024-12-09 10:32:25.055095] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.559 [2024-12-09 10:32:25.060471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:47.559 [2024-12-09 10:32:25.060491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207bdc0 (107): Transport endpoint is not connected 00:21:47.559 [2024-12-09 10:32:25.061473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207bdc0 (9): Bad file descriptor 00:21:47.559 
[2024-12-09 10:32:25.062474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:47.559 [2024-12-09 10:32:25.062488] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:47.559 [2024-12-09 10:32:25.062496] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:47.559 [2024-12-09 10:32:25.062506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:47.559 request: 00:21:47.559 { 00:21:47.559 "name": "TLSTEST", 00:21:47.559 "trtype": "tcp", 00:21:47.559 "traddr": "10.0.0.2", 00:21:47.559 "adrfam": "ipv4", 00:21:47.559 "trsvcid": "4420", 00:21:47.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.559 "prchk_reftag": false, 00:21:47.559 "prchk_guard": false, 00:21:47.559 "hdgst": false, 00:21:47.559 "ddgst": false, 00:21:47.559 "psk": "key0", 00:21:47.559 "allow_unrecognized_csi": false, 00:21:47.559 "method": "bdev_nvme_attach_controller", 00:21:47.559 "req_id": 1 00:21:47.559 } 00:21:47.559 Got JSON-RPC error response 00:21:47.559 response: 00:21:47.559 { 00:21:47.559 "code": -5, 00:21:47.559 "message": "Input/output error" 00:21:47.559 } 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2677762 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2677762 ']' 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2677762 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2677762 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2677762' 00:21:47.559 killing process with pid 2677762 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2677762 00:21:47.559 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.559 00:21:47.559 Latency(us) 00:21:47.559 [2024-12-09T09:32:25.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.559 [2024-12-09T09:32:25.283Z] =================================================================================================================== 00:21:47.559 [2024-12-09T09:32:25.283Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.559 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2677762 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C73KmllEIT 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C73KmllEIT 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C73KmllEIT 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C73KmllEIT 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2677886 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2677886 
/var/tmp/bdevperf.sock 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2677886 ']' 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.877 [2024-12-09 10:32:25.339186] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
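The repeated "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." messages come from the `waitforlisten` helper, which blocks until the freshly launched bdevperf exposes its RPC socket. A simplified sketch of that polling loop; SPDK's real helper also checks the pid and uses its own retry count, so treat this as an illustration only.

```python
import socket
import time

def waitforlisten(path: str, timeout: float = 10.0,
                  interval: float = 0.1) -> bool:
    """Poll a UNIX domain socket until something accepts connections on it,
    or give up after `timeout` seconds. Sketch of what the 'Waiting for
    process to start up...' messages in this log are waiting on."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True       # server is up and listening
            except OSError:
                time.sleep(interval)  # not there yet, retry
    return False
```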
00:21:47.877 [2024-12-09 10:32:25.339238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677886 ] 00:21:47.877 [2024-12-09 10:32:25.415055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.877 [2024-12-09 10:32:25.456383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:47.877 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C73KmllEIT 00:21:48.157 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:48.415 [2024-12-09 10:32:25.907940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.415 [2024-12-09 10:32:25.912689] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:48.415 [2024-12-09 10:32:25.912711] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:48.415 [2024-12-09 10:32:25.912734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:48.415 [2024-12-09 10:32:25.913375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1221dc0 (107): Transport endpoint is not connected 00:21:48.415 [2024-12-09 10:32:25.914366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1221dc0 (9): Bad file descriptor 00:21:48.415 [2024-12-09 10:32:25.915368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:48.415 [2024-12-09 10:32:25.915379] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:48.416 [2024-12-09 10:32:25.915386] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:48.416 [2024-12-09 10:32:25.915396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:48.416 request: 00:21:48.416 { 00:21:48.416 "name": "TLSTEST", 00:21:48.416 "trtype": "tcp", 00:21:48.416 "traddr": "10.0.0.2", 00:21:48.416 "adrfam": "ipv4", 00:21:48.416 "trsvcid": "4420", 00:21:48.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.416 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:48.416 "prchk_reftag": false, 00:21:48.416 "prchk_guard": false, 00:21:48.416 "hdgst": false, 00:21:48.416 "ddgst": false, 00:21:48.416 "psk": "key0", 00:21:48.416 "allow_unrecognized_csi": false, 00:21:48.416 "method": "bdev_nvme_attach_controller", 00:21:48.416 "req_id": 1 00:21:48.416 } 00:21:48.416 Got JSON-RPC error response 00:21:48.416 response: 00:21:48.416 { 00:21:48.416 "code": -5, 00:21:48.416 "message": "Input/output error" 00:21:48.416 } 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2677886 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2677886 ']' 00:21:48.416 10:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2677886 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2677886 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2677886' 00:21:48.416 killing process with pid 2677886 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2677886 00:21:48.416 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.416 00:21:48.416 Latency(us) 00:21:48.416 [2024-12-09T09:32:26.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.416 [2024-12-09T09:32:26.140Z] =================================================================================================================== 00:21:48.416 [2024-12-09T09:32:26.140Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.416 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2677886 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.416 10:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C73KmllEIT 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C73KmllEIT 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C73KmllEIT 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C73KmllEIT 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2678124 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2678124 /var/tmp/bdevperf.sock 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678124 ']' 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.416 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.674 [2024-12-09 10:32:26.179433] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
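Each failing attach in these negative tests also logs the TLS PSK identity the target tried to resolve ("Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>"). Read straight off those error lines, the identity is the literal token `NVMe0R01` followed by the host NQN and subsystem NQN, space-separated; the `0R01` hash/revision fields are not decoded here.

```python
def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    """Compose a PSK identity string as it appears in the
    'Could not find PSK for identity' errors in this log."""
    return f"NVMe0R01 {hostnqn} {subnqn}"

# The mismatched-subnqn attempt looks up this identity, which the target
# cannot find because key0 was registered for cnode1/host1 only.
identity = tls_psk_identity("nqn.2016-06.io.spdk:host1",
                            "nqn.2016-06.io.spdk:cnode2")
assert identity == ("NVMe0R01 nqn.2016-06.io.spdk:host1 "
                    "nqn.2016-06.io.spdk:cnode2")
```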
00:21:48.674 [2024-12-09 10:32:26.179479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678124 ] 00:21:48.674 [2024-12-09 10:32:26.254116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.674 [2024-12-09 10:32:26.293484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.674 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.674 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:48.674 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C73KmllEIT 00:21:48.932 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:49.191 [2024-12-09 10:32:26.757494] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.191 [2024-12-09 10:32:26.764722] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:49.191 [2024-12-09 10:32:26.764743] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:49.191 [2024-12-09 10:32:26.764766] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:49.191 [2024-12-09 10:32:26.765737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd73dc0 (107): Transport endpoint is not connected 00:21:49.191 [2024-12-09 10:32:26.766731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd73dc0 (9): Bad file descriptor 00:21:49.191 [2024-12-09 10:32:26.767733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:49.191 [2024-12-09 10:32:26.767742] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:49.191 [2024-12-09 10:32:26.767749] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:49.191 [2024-12-09 10:32:26.767758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:49.191 request: 00:21:49.191 { 00:21:49.191 "name": "TLSTEST", 00:21:49.191 "trtype": "tcp", 00:21:49.191 "traddr": "10.0.0.2", 00:21:49.191 "adrfam": "ipv4", 00:21:49.191 "trsvcid": "4420", 00:21:49.191 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.191 "prchk_reftag": false, 00:21:49.191 "prchk_guard": false, 00:21:49.191 "hdgst": false, 00:21:49.191 "ddgst": false, 00:21:49.191 "psk": "key0", 00:21:49.191 "allow_unrecognized_csi": false, 00:21:49.191 "method": "bdev_nvme_attach_controller", 00:21:49.191 "req_id": 1 00:21:49.191 } 00:21:49.191 Got JSON-RPC error response 00:21:49.191 response: 00:21:49.191 { 00:21:49.191 "code": -5, 00:21:49.191 "message": "Input/output error" 00:21:49.191 } 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2678124 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678124 ']' 00:21:49.191 10:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2678124 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678124 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678124' 00:21:49.191 killing process with pid 2678124 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678124 00:21:49.191 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.191 00:21:49.191 Latency(us) 00:21:49.191 [2024-12-09T09:32:26.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.191 [2024-12-09T09:32:26.915Z] =================================================================================================================== 00:21:49.191 [2024-12-09T09:32:26.915Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.191 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678124 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.450 10:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:49.450 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2678147 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:49.450 10:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2678147 /var/tmp/bdevperf.sock 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678147 ']' 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.450 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.450 [2024-12-09 10:32:27.049075] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:21:49.450 [2024-12-09 10:32:27.049122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678147 ] 00:21:49.450 [2024-12-09 10:32:27.121608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.450 [2024-12-09 10:32:27.163152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.708 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.708 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:49.708 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:49.708 [2024-12-09 10:32:27.421948] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:49.708 [2024-12-09 10:32:27.421975] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:49.708 request: 00:21:49.708 { 00:21:49.708 "name": "key0", 00:21:49.708 "path": "", 00:21:49.708 "method": "keyring_file_add_key", 00:21:49.708 "req_id": 1 00:21:49.708 } 00:21:49.708 Got JSON-RPC error response 00:21:49.708 response: 00:21:49.708 { 00:21:49.708 "code": -1, 00:21:49.708 "message": "Operation not permitted" 00:21:49.708 } 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:49.966 [2024-12-09 10:32:27.610524] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:49.966 [2024-12-09 10:32:27.610559] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:49.966 request: 00:21:49.966 { 00:21:49.966 "name": "TLSTEST", 00:21:49.966 "trtype": "tcp", 00:21:49.966 "traddr": "10.0.0.2", 00:21:49.966 "adrfam": "ipv4", 00:21:49.966 "trsvcid": "4420", 00:21:49.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.966 "prchk_reftag": false, 00:21:49.966 "prchk_guard": false, 00:21:49.966 "hdgst": false, 00:21:49.966 "ddgst": false, 00:21:49.966 "psk": "key0", 00:21:49.966 "allow_unrecognized_csi": false, 00:21:49.966 "method": "bdev_nvme_attach_controller", 00:21:49.966 "req_id": 1 00:21:49.966 } 00:21:49.966 Got JSON-RPC error response 00:21:49.966 response: 00:21:49.966 { 00:21:49.966 "code": -126, 00:21:49.966 "message": "Required key not available" 00:21:49.966 } 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2678147 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678147 ']' 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2678147 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678147 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678147' 00:21:49.966 killing process with pid 2678147 
00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678147 00:21:49.966 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.966 00:21:49.966 Latency(us) 00:21:49.966 [2024-12-09T09:32:27.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.966 [2024-12-09T09:32:27.690Z] =================================================================================================================== 00:21:49.966 [2024-12-09T09:32:27.690Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.966 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678147 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2673673 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2673673 ']' 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2673673 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2673673 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2673673' 00:21:50.225 killing process with pid 2673673 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2673673 00:21:50.225 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2673673 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.VU79ajSvIn 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:50.483 10:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.VU79ajSvIn 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2678389 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2678389 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678389 ']' 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.483 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.483 [2024-12-09 10:32:28.155934] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:21:50.484 [2024-12-09 10:32:28.155981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.742 [2024-12-09 10:32:28.235728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.742 [2024-12-09 10:32:28.272258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.742 [2024-12-09 10:32:28.272292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.742 [2024-12-09 10:32:28.272299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.742 [2024-12-09 10:32:28.272305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.742 [2024-12-09 10:32:28.272310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:50.742 [2024-12-09 10:32:28.272877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.VU79ajSvIn 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VU79ajSvIn 00:21:50.742 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:51.000 [2024-12-09 10:32:28.588610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.000 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:51.259 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:51.517 [2024-12-09 10:32:28.985620] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.517 [2024-12-09 10:32:28.985826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:51.517 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:51.517 malloc0 00:21:51.517 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:51.775 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:21:52.035 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VU79ajSvIn 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VU79ajSvIn 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2678644 00:21:52.295 10:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2678644 /var/tmp/bdevperf.sock 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678644 ']' 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.295 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.295 [2024-12-09 10:32:29.830627] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:21:52.296 [2024-12-09 10:32:29.830676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678644 ] 00:21:52.296 [2024-12-09 10:32:29.908164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.296 [2024-12-09 10:32:29.948510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.565 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.565 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:52.565 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:21:52.565 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:52.824 [2024-12-09 10:32:30.425888] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.824 TLSTESTn1 00:21:52.824 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:53.082 Running I/O for 10 seconds... 
00:21:54.950 5458.00 IOPS, 21.32 MiB/s [2024-12-09T09:32:34.047Z] 5309.00 IOPS, 20.74 MiB/s [2024-12-09T09:32:34.982Z] 5272.00 IOPS, 20.59 MiB/s [2024-12-09T09:32:35.914Z] 5200.75 IOPS, 20.32 MiB/s [2024-12-09T09:32:36.849Z] 5165.60 IOPS, 20.18 MiB/s [2024-12-09T09:32:37.785Z] 5161.50 IOPS, 20.16 MiB/s [2024-12-09T09:32:38.722Z] 5187.43 IOPS, 20.26 MiB/s [2024-12-09T09:32:39.657Z] 5189.62 IOPS, 20.27 MiB/s [2024-12-09T09:32:41.034Z] 5164.78 IOPS, 20.17 MiB/s [2024-12-09T09:32:41.034Z] 5137.50 IOPS, 20.07 MiB/s 00:22:03.311 Latency(us) 00:22:03.311 [2024-12-09T09:32:41.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.311 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:03.311 Verification LBA range: start 0x0 length 0x2000 00:22:03.311 TLSTESTn1 : 10.02 5141.71 20.08 0.00 0.00 24858.91 5898.24 36700.16 00:22:03.311 [2024-12-09T09:32:41.035Z] =================================================================================================================== 00:22:03.311 [2024-12-09T09:32:41.035Z] Total : 5141.71 20.08 0.00 0.00 24858.91 5898.24 36700.16 00:22:03.311 { 00:22:03.311 "results": [ 00:22:03.311 { 00:22:03.311 "job": "TLSTESTn1", 00:22:03.311 "core_mask": "0x4", 00:22:03.311 "workload": "verify", 00:22:03.311 "status": "finished", 00:22:03.311 "verify_range": { 00:22:03.311 "start": 0, 00:22:03.311 "length": 8192 00:22:03.311 }, 00:22:03.311 "queue_depth": 128, 00:22:03.311 "io_size": 4096, 00:22:03.311 "runtime": 10.016516, 00:22:03.311 "iops": 5141.707955141289, 00:22:03.311 "mibps": 20.08479669977066, 00:22:03.311 "io_failed": 0, 00:22:03.311 "io_timeout": 0, 00:22:03.311 "avg_latency_us": 24858.910015810758, 00:22:03.311 "min_latency_us": 5898.24, 00:22:03.311 "max_latency_us": 36700.16 00:22:03.311 } 00:22:03.311 ], 00:22:03.311 "core_count": 1 00:22:03.311 } 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2678644 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678644 ']' 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2678644 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678644 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678644' 00:22:03.311 killing process with pid 2678644 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678644 00:22:03.311 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.311 00:22:03.311 Latency(us) 00:22:03.311 [2024-12-09T09:32:41.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.311 [2024-12-09T09:32:41.035Z] =================================================================================================================== 00:22:03.311 [2024-12-09T09:32:41.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678644 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.VU79ajSvIn 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VU79ajSvIn 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VU79ajSvIn 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VU79ajSvIn 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VU79ajSvIn 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2680480 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2680480 /var/tmp/bdevperf.sock 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2680480 ']' 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.311 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.311 [2024-12-09 10:32:40.938834] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:22:03.311 [2024-12-09 10:32:40.938884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680480 ] 00:22:03.311 [2024-12-09 10:32:41.007216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.570 [2024-12-09 10:32:41.044260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.570 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.570 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:03.570 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:22:03.829 [2024-12-09 10:32:41.311233] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VU79ajSvIn': 0100666 00:22:03.829 [2024-12-09 10:32:41.311266] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:03.829 request: 00:22:03.829 { 00:22:03.829 "name": "key0", 00:22:03.829 "path": "/tmp/tmp.VU79ajSvIn", 00:22:03.829 "method": "keyring_file_add_key", 00:22:03.829 "req_id": 1 00:22:03.829 } 00:22:03.829 Got JSON-RPC error response 00:22:03.829 response: 00:22:03.829 { 00:22:03.829 "code": -1, 00:22:03.829 "message": "Operation not permitted" 00:22:03.829 } 00:22:03.829 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.829 [2024-12-09 10:32:41.519848] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.829 [2024-12-09 10:32:41.519879] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:03.829 request: 00:22:03.829 { 00:22:03.829 "name": "TLSTEST", 00:22:03.829 "trtype": "tcp", 00:22:03.829 "traddr": "10.0.0.2", 00:22:03.829 "adrfam": "ipv4", 00:22:03.829 "trsvcid": "4420", 00:22:03.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.829 "prchk_reftag": false, 00:22:03.829 "prchk_guard": false, 00:22:03.829 "hdgst": false, 00:22:03.829 "ddgst": false, 00:22:03.829 "psk": "key0", 00:22:03.829 "allow_unrecognized_csi": false, 00:22:03.829 "method": "bdev_nvme_attach_controller", 00:22:03.829 "req_id": 1 00:22:03.829 } 00:22:03.829 Got JSON-RPC error response 00:22:03.829 response: 00:22:03.829 { 00:22:03.829 "code": -126, 00:22:03.829 "message": "Required key not available" 00:22:03.829 } 00:22:03.829 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2680480 00:22:03.829 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2680480 ']' 00:22:03.829 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2680480 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680480 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2680480' 00:22:04.088 killing process with pid 2680480 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2680480 00:22:04.088 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.088 00:22:04.088 Latency(us) 00:22:04.088 [2024-12-09T09:32:41.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.088 [2024-12-09T09:32:41.812Z] =================================================================================================================== 00:22:04.088 [2024-12-09T09:32:41.812Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2680480 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2678389 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678389 ']' 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2678389 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678389 00:22:04.088 
10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678389' 00:22:04.088 killing process with pid 2678389 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678389 00:22:04.088 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678389 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2680721 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2680721 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2680721 ']' 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:04.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.348 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.348 [2024-12-09 10:32:42.026185] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:04.348 [2024-12-09 10:32:42.026235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.607 [2024-12-09 10:32:42.102222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.607 [2024-12-09 10:32:42.141767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.607 [2024-12-09 10:32:42.141806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.607 [2024-12-09 10:32:42.141818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.607 [2024-12-09 10:32:42.141823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.607 [2024-12-09 10:32:42.141828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.607 [2024-12-09 10:32:42.142403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.VU79ajSvIn 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.VU79ajSvIn 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.VU79ajSvIn 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VU79ajSvIn 00:22:04.607 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.866 [2024-12-09 10:32:42.453111] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.866 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:05.125 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:05.125 [2024-12-09 10:32:42.846116] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.125 [2024-12-09 10:32:42.846299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.381 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.381 malloc0 00:22:05.381 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.639 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:22:05.896 [2024-12-09 10:32:43.443557] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VU79ajSvIn': 0100666 00:22:05.897 [2024-12-09 10:32:43.443578] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:05.897 request: 00:22:05.897 { 00:22:05.897 "name": "key0", 00:22:05.897 "path": "/tmp/tmp.VU79ajSvIn", 00:22:05.897 "method": "keyring_file_add_key", 00:22:05.897 "req_id": 1 
00:22:05.897 } 00:22:05.897 Got JSON-RPC error response 00:22:05.897 response: 00:22:05.897 { 00:22:05.897 "code": -1, 00:22:05.897 "message": "Operation not permitted" 00:22:05.897 } 00:22:05.897 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:06.154 [2024-12-09 10:32:43.636081] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:06.154 [2024-12-09 10:32:43.636112] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:06.154 request: 00:22:06.154 { 00:22:06.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.154 "host": "nqn.2016-06.io.spdk:host1", 00:22:06.154 "psk": "key0", 00:22:06.154 "method": "nvmf_subsystem_add_host", 00:22:06.154 "req_id": 1 00:22:06.154 } 00:22:06.154 Got JSON-RPC error response 00:22:06.154 response: 00:22:06.154 { 00:22:06.154 "code": -32603, 00:22:06.154 "message": "Internal error" 00:22:06.154 } 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2680721 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2680721 ']' 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2680721 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:06.154 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680721 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2680721' 00:22:06.154 killing process with pid 2680721 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2680721 00:22:06.154 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2680721 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.VU79ajSvIn 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2680992 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2680992 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2680992 ']' 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.412 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.412 [2024-12-09 10:32:43.947402] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:06.412 [2024-12-09 10:32:43.947452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.412 [2024-12-09 10:32:44.023447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.412 [2024-12-09 10:32:44.063928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.412 [2024-12-09 10:32:44.063962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.412 [2024-12-09 10:32:44.063970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.412 [2024-12-09 10:32:44.063976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.412 [2024-12-09 10:32:44.063981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.412 [2024-12-09 10:32:44.064542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.VU79ajSvIn 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VU79ajSvIn 00:22:06.670 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.670 [2024-12-09 10:32:44.389683] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.927 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:06.927 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:07.185 [2024-12-09 10:32:44.778660] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.185 [2024-12-09 10:32:44.778878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:07.185 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.443 malloc0 00:22:07.443 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:07.701 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:22:07.701 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2681308 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2681308 /var/tmp/bdevperf.sock 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2681308 ']' 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:22:07.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.958 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.958 [2024-12-09 10:32:45.628102] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:07.958 [2024-12-09 10:32:45.628148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681308 ] 00:22:08.215 [2024-12-09 10:32:45.702106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.215 [2024-12-09 10:32:45.744501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.215 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.215 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:08.215 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:22:08.473 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:08.473 [2024-12-09 10:32:46.175480] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.730 TLSTESTn1 00:22:08.730 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:08.988 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:08.988 "subsystems": [ 00:22:08.988 { 00:22:08.988 "subsystem": "keyring", 00:22:08.988 "config": [ 00:22:08.988 { 00:22:08.988 "method": "keyring_file_add_key", 00:22:08.988 "params": { 00:22:08.988 "name": "key0", 00:22:08.988 "path": "/tmp/tmp.VU79ajSvIn" 00:22:08.988 } 00:22:08.988 } 00:22:08.988 ] 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "subsystem": "iobuf", 00:22:08.988 "config": [ 00:22:08.988 { 00:22:08.988 "method": "iobuf_set_options", 00:22:08.988 "params": { 00:22:08.988 "small_pool_count": 8192, 00:22:08.988 "large_pool_count": 1024, 00:22:08.988 "small_bufsize": 8192, 00:22:08.988 "large_bufsize": 135168, 00:22:08.988 "enable_numa": false 00:22:08.988 } 00:22:08.988 } 00:22:08.988 ] 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "subsystem": "sock", 00:22:08.988 "config": [ 00:22:08.988 { 00:22:08.988 "method": "sock_set_default_impl", 00:22:08.988 "params": { 00:22:08.988 "impl_name": "posix" 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "sock_impl_set_options", 00:22:08.988 "params": { 00:22:08.988 "impl_name": "ssl", 00:22:08.988 "recv_buf_size": 4096, 00:22:08.988 "send_buf_size": 4096, 00:22:08.988 "enable_recv_pipe": true, 00:22:08.988 "enable_quickack": false, 00:22:08.988 "enable_placement_id": 0, 00:22:08.988 "enable_zerocopy_send_server": true, 00:22:08.988 "enable_zerocopy_send_client": false, 00:22:08.988 "zerocopy_threshold": 0, 00:22:08.988 "tls_version": 0, 00:22:08.988 "enable_ktls": false 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "sock_impl_set_options", 00:22:08.988 "params": { 00:22:08.988 "impl_name": "posix", 00:22:08.988 "recv_buf_size": 2097152, 00:22:08.988 "send_buf_size": 2097152, 00:22:08.988 "enable_recv_pipe": true, 00:22:08.988 "enable_quickack": false, 00:22:08.988 "enable_placement_id": 0, 
00:22:08.988 "enable_zerocopy_send_server": true, 00:22:08.988 "enable_zerocopy_send_client": false, 00:22:08.988 "zerocopy_threshold": 0, 00:22:08.988 "tls_version": 0, 00:22:08.988 "enable_ktls": false 00:22:08.988 } 00:22:08.988 } 00:22:08.988 ] 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "subsystem": "vmd", 00:22:08.988 "config": [] 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "subsystem": "accel", 00:22:08.988 "config": [ 00:22:08.988 { 00:22:08.988 "method": "accel_set_options", 00:22:08.988 "params": { 00:22:08.988 "small_cache_size": 128, 00:22:08.988 "large_cache_size": 16, 00:22:08.988 "task_count": 2048, 00:22:08.988 "sequence_count": 2048, 00:22:08.988 "buf_count": 2048 00:22:08.988 } 00:22:08.988 } 00:22:08.988 ] 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "subsystem": "bdev", 00:22:08.988 "config": [ 00:22:08.988 { 00:22:08.988 "method": "bdev_set_options", 00:22:08.988 "params": { 00:22:08.988 "bdev_io_pool_size": 65535, 00:22:08.988 "bdev_io_cache_size": 256, 00:22:08.988 "bdev_auto_examine": true, 00:22:08.988 "iobuf_small_cache_size": 128, 00:22:08.988 "iobuf_large_cache_size": 16 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "bdev_raid_set_options", 00:22:08.988 "params": { 00:22:08.988 "process_window_size_kb": 1024, 00:22:08.988 "process_max_bandwidth_mb_sec": 0 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "bdev_iscsi_set_options", 00:22:08.988 "params": { 00:22:08.988 "timeout_sec": 30 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "bdev_nvme_set_options", 00:22:08.988 "params": { 00:22:08.988 "action_on_timeout": "none", 00:22:08.988 "timeout_us": 0, 00:22:08.988 "timeout_admin_us": 0, 00:22:08.988 "keep_alive_timeout_ms": 10000, 00:22:08.988 "arbitration_burst": 0, 00:22:08.988 "low_priority_weight": 0, 00:22:08.988 "medium_priority_weight": 0, 00:22:08.988 "high_priority_weight": 0, 00:22:08.988 "nvme_adminq_poll_period_us": 10000, 00:22:08.988 "nvme_ioq_poll_period_us": 0, 
00:22:08.988 "io_queue_requests": 0, 00:22:08.988 "delay_cmd_submit": true, 00:22:08.988 "transport_retry_count": 4, 00:22:08.988 "bdev_retry_count": 3, 00:22:08.988 "transport_ack_timeout": 0, 00:22:08.988 "ctrlr_loss_timeout_sec": 0, 00:22:08.988 "reconnect_delay_sec": 0, 00:22:08.988 "fast_io_fail_timeout_sec": 0, 00:22:08.988 "disable_auto_failback": false, 00:22:08.988 "generate_uuids": false, 00:22:08.988 "transport_tos": 0, 00:22:08.988 "nvme_error_stat": false, 00:22:08.988 "rdma_srq_size": 0, 00:22:08.988 "io_path_stat": false, 00:22:08.988 "allow_accel_sequence": false, 00:22:08.988 "rdma_max_cq_size": 0, 00:22:08.988 "rdma_cm_event_timeout_ms": 0, 00:22:08.988 "dhchap_digests": [ 00:22:08.988 "sha256", 00:22:08.988 "sha384", 00:22:08.988 "sha512" 00:22:08.988 ], 00:22:08.988 "dhchap_dhgroups": [ 00:22:08.988 "null", 00:22:08.988 "ffdhe2048", 00:22:08.988 "ffdhe3072", 00:22:08.988 "ffdhe4096", 00:22:08.988 "ffdhe6144", 00:22:08.988 "ffdhe8192" 00:22:08.988 ] 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "bdev_nvme_set_hotplug", 00:22:08.988 "params": { 00:22:08.988 "period_us": 100000, 00:22:08.988 "enable": false 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "bdev_malloc_create", 00:22:08.988 "params": { 00:22:08.988 "name": "malloc0", 00:22:08.988 "num_blocks": 8192, 00:22:08.988 "block_size": 4096, 00:22:08.988 "physical_block_size": 4096, 00:22:08.988 "uuid": "a7fb69b5-8b53-405a-a8fb-2653b187cc6f", 00:22:08.988 "optimal_io_boundary": 0, 00:22:08.988 "md_size": 0, 00:22:08.988 "dif_type": 0, 00:22:08.988 "dif_is_head_of_md": false, 00:22:08.988 "dif_pi_format": 0 00:22:08.988 } 00:22:08.988 }, 00:22:08.988 { 00:22:08.988 "method": "bdev_wait_for_examine" 00:22:08.989 } 00:22:08.989 ] 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "subsystem": "nbd", 00:22:08.989 "config": [] 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "subsystem": "scheduler", 00:22:08.989 "config": [ 00:22:08.989 { 00:22:08.989 "method": 
"framework_set_scheduler", 00:22:08.989 "params": { 00:22:08.989 "name": "static" 00:22:08.989 } 00:22:08.989 } 00:22:08.989 ] 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "subsystem": "nvmf", 00:22:08.989 "config": [ 00:22:08.989 { 00:22:08.989 "method": "nvmf_set_config", 00:22:08.989 "params": { 00:22:08.989 "discovery_filter": "match_any", 00:22:08.989 "admin_cmd_passthru": { 00:22:08.989 "identify_ctrlr": false 00:22:08.989 }, 00:22:08.989 "dhchap_digests": [ 00:22:08.989 "sha256", 00:22:08.989 "sha384", 00:22:08.989 "sha512" 00:22:08.989 ], 00:22:08.989 "dhchap_dhgroups": [ 00:22:08.989 "null", 00:22:08.989 "ffdhe2048", 00:22:08.989 "ffdhe3072", 00:22:08.989 "ffdhe4096", 00:22:08.989 "ffdhe6144", 00:22:08.989 "ffdhe8192" 00:22:08.989 ] 00:22:08.989 } 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "method": "nvmf_set_max_subsystems", 00:22:08.989 "params": { 00:22:08.989 "max_subsystems": 1024 00:22:08.989 } 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "method": "nvmf_set_crdt", 00:22:08.989 "params": { 00:22:08.989 "crdt1": 0, 00:22:08.989 "crdt2": 0, 00:22:08.989 "crdt3": 0 00:22:08.989 } 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "method": "nvmf_create_transport", 00:22:08.989 "params": { 00:22:08.989 "trtype": "TCP", 00:22:08.989 "max_queue_depth": 128, 00:22:08.989 "max_io_qpairs_per_ctrlr": 127, 00:22:08.989 "in_capsule_data_size": 4096, 00:22:08.989 "max_io_size": 131072, 00:22:08.989 "io_unit_size": 131072, 00:22:08.989 "max_aq_depth": 128, 00:22:08.989 "num_shared_buffers": 511, 00:22:08.989 "buf_cache_size": 4294967295, 00:22:08.989 "dif_insert_or_strip": false, 00:22:08.989 "zcopy": false, 00:22:08.989 "c2h_success": false, 00:22:08.989 "sock_priority": 0, 00:22:08.989 "abort_timeout_sec": 1, 00:22:08.989 "ack_timeout": 0, 00:22:08.989 "data_wr_pool_size": 0 00:22:08.989 } 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "method": "nvmf_create_subsystem", 00:22:08.989 "params": { 00:22:08.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.989 
"allow_any_host": false, 00:22:08.989 "serial_number": "SPDK00000000000001", 00:22:08.989 "model_number": "SPDK bdev Controller", 00:22:08.989 "max_namespaces": 10, 00:22:08.989 "min_cntlid": 1, 00:22:08.989 "max_cntlid": 65519, 00:22:08.989 "ana_reporting": false 00:22:08.989 } 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "method": "nvmf_subsystem_add_host", 00:22:08.989 "params": { 00:22:08.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.989 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.989 "psk": "key0" 00:22:08.989 } 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "method": "nvmf_subsystem_add_ns", 00:22:08.989 "params": { 00:22:08.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.989 "namespace": { 00:22:08.989 "nsid": 1, 00:22:08.989 "bdev_name": "malloc0", 00:22:08.989 "nguid": "A7FB69B58B53405AA8FB2653B187CC6F", 00:22:08.989 "uuid": "a7fb69b5-8b53-405a-a8fb-2653b187cc6f", 00:22:08.989 "no_auto_visible": false 00:22:08.989 } 00:22:08.989 } 00:22:08.989 }, 00:22:08.989 { 00:22:08.989 "method": "nvmf_subsystem_add_listener", 00:22:08.989 "params": { 00:22:08.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.989 "listen_address": { 00:22:08.989 "trtype": "TCP", 00:22:08.989 "adrfam": "IPv4", 00:22:08.989 "traddr": "10.0.0.2", 00:22:08.989 "trsvcid": "4420" 00:22:08.989 }, 00:22:08.989 "secure_channel": true 00:22:08.989 } 00:22:08.989 } 00:22:08.989 ] 00:22:08.989 } 00:22:08.989 ] 00:22:08.989 }' 00:22:08.989 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:09.248 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:09.248 "subsystems": [ 00:22:09.248 { 00:22:09.248 "subsystem": "keyring", 00:22:09.248 "config": [ 00:22:09.248 { 00:22:09.248 "method": "keyring_file_add_key", 00:22:09.248 "params": { 00:22:09.248 "name": "key0", 00:22:09.248 "path": "/tmp/tmp.VU79ajSvIn" 00:22:09.248 } 
00:22:09.248 } 00:22:09.248 ] 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "subsystem": "iobuf", 00:22:09.248 "config": [ 00:22:09.248 { 00:22:09.248 "method": "iobuf_set_options", 00:22:09.248 "params": { 00:22:09.248 "small_pool_count": 8192, 00:22:09.248 "large_pool_count": 1024, 00:22:09.248 "small_bufsize": 8192, 00:22:09.248 "large_bufsize": 135168, 00:22:09.248 "enable_numa": false 00:22:09.248 } 00:22:09.248 } 00:22:09.248 ] 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "subsystem": "sock", 00:22:09.248 "config": [ 00:22:09.248 { 00:22:09.248 "method": "sock_set_default_impl", 00:22:09.248 "params": { 00:22:09.248 "impl_name": "posix" 00:22:09.248 } 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "method": "sock_impl_set_options", 00:22:09.248 "params": { 00:22:09.248 "impl_name": "ssl", 00:22:09.248 "recv_buf_size": 4096, 00:22:09.248 "send_buf_size": 4096, 00:22:09.248 "enable_recv_pipe": true, 00:22:09.248 "enable_quickack": false, 00:22:09.248 "enable_placement_id": 0, 00:22:09.248 "enable_zerocopy_send_server": true, 00:22:09.248 "enable_zerocopy_send_client": false, 00:22:09.248 "zerocopy_threshold": 0, 00:22:09.248 "tls_version": 0, 00:22:09.248 "enable_ktls": false 00:22:09.248 } 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "method": "sock_impl_set_options", 00:22:09.248 "params": { 00:22:09.248 "impl_name": "posix", 00:22:09.248 "recv_buf_size": 2097152, 00:22:09.248 "send_buf_size": 2097152, 00:22:09.248 "enable_recv_pipe": true, 00:22:09.248 "enable_quickack": false, 00:22:09.248 "enable_placement_id": 0, 00:22:09.248 "enable_zerocopy_send_server": true, 00:22:09.248 "enable_zerocopy_send_client": false, 00:22:09.248 "zerocopy_threshold": 0, 00:22:09.248 "tls_version": 0, 00:22:09.248 "enable_ktls": false 00:22:09.248 } 00:22:09.248 } 00:22:09.248 ] 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "subsystem": "vmd", 00:22:09.248 "config": [] 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "subsystem": "accel", 00:22:09.248 "config": [ 00:22:09.248 { 00:22:09.248 
"method": "accel_set_options", 00:22:09.248 "params": { 00:22:09.248 "small_cache_size": 128, 00:22:09.248 "large_cache_size": 16, 00:22:09.248 "task_count": 2048, 00:22:09.248 "sequence_count": 2048, 00:22:09.248 "buf_count": 2048 00:22:09.248 } 00:22:09.248 } 00:22:09.248 ] 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "subsystem": "bdev", 00:22:09.248 "config": [ 00:22:09.248 { 00:22:09.248 "method": "bdev_set_options", 00:22:09.248 "params": { 00:22:09.248 "bdev_io_pool_size": 65535, 00:22:09.248 "bdev_io_cache_size": 256, 00:22:09.248 "bdev_auto_examine": true, 00:22:09.248 "iobuf_small_cache_size": 128, 00:22:09.248 "iobuf_large_cache_size": 16 00:22:09.248 } 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "method": "bdev_raid_set_options", 00:22:09.248 "params": { 00:22:09.248 "process_window_size_kb": 1024, 00:22:09.248 "process_max_bandwidth_mb_sec": 0 00:22:09.248 } 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "method": "bdev_iscsi_set_options", 00:22:09.248 "params": { 00:22:09.248 "timeout_sec": 30 00:22:09.248 } 00:22:09.248 }, 00:22:09.248 { 00:22:09.248 "method": "bdev_nvme_set_options", 00:22:09.248 "params": { 00:22:09.248 "action_on_timeout": "none", 00:22:09.248 "timeout_us": 0, 00:22:09.248 "timeout_admin_us": 0, 00:22:09.248 "keep_alive_timeout_ms": 10000, 00:22:09.248 "arbitration_burst": 0, 00:22:09.248 "low_priority_weight": 0, 00:22:09.248 "medium_priority_weight": 0, 00:22:09.248 "high_priority_weight": 0, 00:22:09.248 "nvme_adminq_poll_period_us": 10000, 00:22:09.248 "nvme_ioq_poll_period_us": 0, 00:22:09.248 "io_queue_requests": 512, 00:22:09.249 "delay_cmd_submit": true, 00:22:09.249 "transport_retry_count": 4, 00:22:09.249 "bdev_retry_count": 3, 00:22:09.249 "transport_ack_timeout": 0, 00:22:09.249 "ctrlr_loss_timeout_sec": 0, 00:22:09.249 "reconnect_delay_sec": 0, 00:22:09.249 "fast_io_fail_timeout_sec": 0, 00:22:09.249 "disable_auto_failback": false, 00:22:09.249 "generate_uuids": false, 00:22:09.249 "transport_tos": 0, 00:22:09.249 
"nvme_error_stat": false, 00:22:09.249 "rdma_srq_size": 0, 00:22:09.249 "io_path_stat": false, 00:22:09.249 "allow_accel_sequence": false, 00:22:09.249 "rdma_max_cq_size": 0, 00:22:09.249 "rdma_cm_event_timeout_ms": 0, 00:22:09.249 "dhchap_digests": [ 00:22:09.249 "sha256", 00:22:09.249 "sha384", 00:22:09.249 "sha512" 00:22:09.249 ], 00:22:09.249 "dhchap_dhgroups": [ 00:22:09.249 "null", 00:22:09.249 "ffdhe2048", 00:22:09.249 "ffdhe3072", 00:22:09.249 "ffdhe4096", 00:22:09.249 "ffdhe6144", 00:22:09.249 "ffdhe8192" 00:22:09.249 ] 00:22:09.249 } 00:22:09.249 }, 00:22:09.249 { 00:22:09.249 "method": "bdev_nvme_attach_controller", 00:22:09.249 "params": { 00:22:09.249 "name": "TLSTEST", 00:22:09.249 "trtype": "TCP", 00:22:09.249 "adrfam": "IPv4", 00:22:09.249 "traddr": "10.0.0.2", 00:22:09.249 "trsvcid": "4420", 00:22:09.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.249 "prchk_reftag": false, 00:22:09.249 "prchk_guard": false, 00:22:09.249 "ctrlr_loss_timeout_sec": 0, 00:22:09.249 "reconnect_delay_sec": 0, 00:22:09.249 "fast_io_fail_timeout_sec": 0, 00:22:09.249 "psk": "key0", 00:22:09.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.249 "hdgst": false, 00:22:09.249 "ddgst": false, 00:22:09.249 "multipath": "multipath" 00:22:09.249 } 00:22:09.249 }, 00:22:09.249 { 00:22:09.249 "method": "bdev_nvme_set_hotplug", 00:22:09.249 "params": { 00:22:09.249 "period_us": 100000, 00:22:09.249 "enable": false 00:22:09.249 } 00:22:09.249 }, 00:22:09.249 { 00:22:09.249 "method": "bdev_wait_for_examine" 00:22:09.249 } 00:22:09.249 ] 00:22:09.249 }, 00:22:09.249 { 00:22:09.249 "subsystem": "nbd", 00:22:09.249 "config": [] 00:22:09.249 } 00:22:09.249 ] 00:22:09.249 }' 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2681308 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2681308 ']' 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2681308 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2681308 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2681308' 00:22:09.249 killing process with pid 2681308 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2681308 00:22:09.249 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.249 00:22:09.249 Latency(us) 00:22:09.249 [2024-12-09T09:32:46.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.249 [2024-12-09T09:32:46.973Z] =================================================================================================================== 00:22:09.249 [2024-12-09T09:32:46.973Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:09.249 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2681308 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2680992 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2680992 ']' 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2680992 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680992 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2680992' 00:22:09.508 killing process with pid 2680992 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2680992 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2680992 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.508 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:09.508 "subsystems": [ 00:22:09.508 { 00:22:09.508 "subsystem": "keyring", 00:22:09.508 "config": [ 00:22:09.508 { 00:22:09.508 "method": "keyring_file_add_key", 00:22:09.508 "params": { 00:22:09.508 "name": "key0", 00:22:09.509 "path": "/tmp/tmp.VU79ajSvIn" 00:22:09.509 } 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "subsystem": "iobuf", 00:22:09.509 "config": [ 00:22:09.509 { 00:22:09.509 "method": "iobuf_set_options", 00:22:09.509 "params": { 00:22:09.509 "small_pool_count": 8192, 00:22:09.509 "large_pool_count": 1024, 00:22:09.509 "small_bufsize": 8192, 00:22:09.509 "large_bufsize": 135168, 00:22:09.509 "enable_numa": false 00:22:09.509 } 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 }, 
00:22:09.509 { 00:22:09.509 "subsystem": "sock", 00:22:09.509 "config": [ 00:22:09.509 { 00:22:09.509 "method": "sock_set_default_impl", 00:22:09.509 "params": { 00:22:09.509 "impl_name": "posix" 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "sock_impl_set_options", 00:22:09.509 "params": { 00:22:09.509 "impl_name": "ssl", 00:22:09.509 "recv_buf_size": 4096, 00:22:09.509 "send_buf_size": 4096, 00:22:09.509 "enable_recv_pipe": true, 00:22:09.509 "enable_quickack": false, 00:22:09.509 "enable_placement_id": 0, 00:22:09.509 "enable_zerocopy_send_server": true, 00:22:09.509 "enable_zerocopy_send_client": false, 00:22:09.509 "zerocopy_threshold": 0, 00:22:09.509 "tls_version": 0, 00:22:09.509 "enable_ktls": false 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "sock_impl_set_options", 00:22:09.509 "params": { 00:22:09.509 "impl_name": "posix", 00:22:09.509 "recv_buf_size": 2097152, 00:22:09.509 "send_buf_size": 2097152, 00:22:09.509 "enable_recv_pipe": true, 00:22:09.509 "enable_quickack": false, 00:22:09.509 "enable_placement_id": 0, 00:22:09.509 "enable_zerocopy_send_server": true, 00:22:09.509 "enable_zerocopy_send_client": false, 00:22:09.509 "zerocopy_threshold": 0, 00:22:09.509 "tls_version": 0, 00:22:09.509 "enable_ktls": false 00:22:09.509 } 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "subsystem": "vmd", 00:22:09.509 "config": [] 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "subsystem": "accel", 00:22:09.509 "config": [ 00:22:09.509 { 00:22:09.509 "method": "accel_set_options", 00:22:09.509 "params": { 00:22:09.509 "small_cache_size": 128, 00:22:09.509 "large_cache_size": 16, 00:22:09.509 "task_count": 2048, 00:22:09.509 "sequence_count": 2048, 00:22:09.509 "buf_count": 2048 00:22:09.509 } 00:22:09.509 } 00:22:09.509 ] 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "subsystem": "bdev", 00:22:09.509 "config": [ 00:22:09.509 { 00:22:09.509 "method": "bdev_set_options", 00:22:09.509 "params": { 
00:22:09.509 "bdev_io_pool_size": 65535, 00:22:09.509 "bdev_io_cache_size": 256, 00:22:09.509 "bdev_auto_examine": true, 00:22:09.509 "iobuf_small_cache_size": 128, 00:22:09.509 "iobuf_large_cache_size": 16 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_raid_set_options", 00:22:09.509 "params": { 00:22:09.509 "process_window_size_kb": 1024, 00:22:09.509 "process_max_bandwidth_mb_sec": 0 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_iscsi_set_options", 00:22:09.509 "params": { 00:22:09.509 "timeout_sec": 30 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_nvme_set_options", 00:22:09.509 "params": { 00:22:09.509 "action_on_timeout": "none", 00:22:09.509 "timeout_us": 0, 00:22:09.509 "timeout_admin_us": 0, 00:22:09.509 "keep_alive_timeout_ms": 10000, 00:22:09.509 "arbitration_burst": 0, 00:22:09.509 "low_priority_weight": 0, 00:22:09.509 "medium_priority_weight": 0, 00:22:09.509 "high_priority_weight": 0, 00:22:09.509 "nvme_adminq_poll_period_us": 10000, 00:22:09.509 "nvme_ioq_poll_period_us": 0, 00:22:09.509 "io_queue_requests": 0, 00:22:09.509 "delay_cmd_submit": true, 00:22:09.509 "transport_retry_count": 4, 00:22:09.509 "bdev_retry_count": 3, 00:22:09.509 "transport_ack_timeout": 0, 00:22:09.509 "ctrlr_loss_timeout_sec": 0, 00:22:09.509 "reconnect_delay_sec": 0, 00:22:09.509 "fast_io_fail_timeout_sec": 0, 00:22:09.509 "disable_auto_failback": false, 00:22:09.509 "generate_uuids": false, 00:22:09.509 "transport_tos": 0, 00:22:09.509 "nvme_error_stat": false, 00:22:09.509 "rdma_srq_size": 0, 00:22:09.509 "io_path_stat": false, 00:22:09.509 "allow_accel_sequence": false, 00:22:09.509 "rdma_max_cq_size": 0, 00:22:09.509 "rdma_cm_event_timeout_ms": 0, 00:22:09.509 "dhchap_digests": [ 00:22:09.509 "sha256", 00:22:09.509 "sha384", 00:22:09.509 "sha512" 00:22:09.509 ], 00:22:09.509 "dhchap_dhgroups": [ 00:22:09.509 "null", 00:22:09.509 "ffdhe2048", 00:22:09.509 "ffdhe3072", 00:22:09.509 
"ffdhe4096", 00:22:09.509 "ffdhe6144", 00:22:09.509 "ffdhe8192" 00:22:09.509 ] 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_nvme_set_hotplug", 00:22:09.509 "params": { 00:22:09.509 "period_us": 100000, 00:22:09.509 "enable": false 00:22:09.509 } 00:22:09.509 }, 00:22:09.509 { 00:22:09.509 "method": "bdev_malloc_create", 00:22:09.509 "params": { 00:22:09.509 "name": "malloc0", 00:22:09.509 "num_blocks": 8192, 00:22:09.509 "block_size": 4096, 00:22:09.509 "physical_block_size": 4096, 00:22:09.509 "uuid": "a7fb69b5-8b53-405a-a8fb-2653b187cc6f", 00:22:09.510 "optimal_io_boundary": 0, 00:22:09.510 "md_size": 0, 00:22:09.510 "dif_type": 0, 00:22:09.510 "dif_is_head_of_md": false, 00:22:09.510 "dif_pi_format": 0 00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "bdev_wait_for_examine" 00:22:09.510 } 00:22:09.510 ] 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "subsystem": "nbd", 00:22:09.510 "config": [] 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "subsystem": "scheduler", 00:22:09.510 "config": [ 00:22:09.510 { 00:22:09.510 "method": "framework_set_scheduler", 00:22:09.510 "params": { 00:22:09.510 "name": "static" 00:22:09.510 } 00:22:09.510 } 00:22:09.510 ] 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "subsystem": "nvmf", 00:22:09.510 "config": [ 00:22:09.510 { 00:22:09.510 "method": "nvmf_set_config", 00:22:09.510 "params": { 00:22:09.510 "discovery_filter": "match_any", 00:22:09.510 "admin_cmd_passthru": { 00:22:09.510 "identify_ctrlr": false 00:22:09.510 }, 00:22:09.510 "dhchap_digests": [ 00:22:09.510 "sha256", 00:22:09.510 "sha384", 00:22:09.510 "sha512" 00:22:09.510 ], 00:22:09.510 "dhchap_dhgroups": [ 00:22:09.510 "null", 00:22:09.510 "ffdhe2048", 00:22:09.510 "ffdhe3072", 00:22:09.510 "ffdhe4096", 00:22:09.510 "ffdhe6144", 00:22:09.510 "ffdhe8192" 00:22:09.510 ] 00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "nvmf_set_max_subsystems", 00:22:09.510 "params": { 00:22:09.510 "max_subsystems": 1024 
00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "nvmf_set_crdt", 00:22:09.510 "params": { 00:22:09.510 "crdt1": 0, 00:22:09.510 "crdt2": 0, 00:22:09.510 "crdt3": 0 00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "nvmf_create_transport", 00:22:09.510 "params": { 00:22:09.510 "trtype": "TCP", 00:22:09.510 "max_queue_depth": 128, 00:22:09.510 "max_io_qpairs_per_ctrlr": 127, 00:22:09.510 "in_capsule_data_size": 4096, 00:22:09.510 "max_io_size": 131072, 00:22:09.510 "io_unit_size": 131072, 00:22:09.510 "max_aq_depth": 128, 00:22:09.510 "num_shared_buffers": 511, 00:22:09.510 "buf_cache_size": 4294967295, 00:22:09.510 "dif_insert_or_strip": false, 00:22:09.510 "zcopy": false, 00:22:09.510 "c2h_success": false, 00:22:09.510 "sock_priority": 0, 00:22:09.510 "abort_timeout_sec": 1, 00:22:09.510 "ack_timeout": 0, 00:22:09.510 "data_wr_pool_size": 0 00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "nvmf_create_subsystem", 00:22:09.510 "params": { 00:22:09.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.510 "allow_any_host": false, 00:22:09.510 "serial_number": "SPDK00000000000001", 00:22:09.510 "model_number": "SPDK bdev Controller", 00:22:09.510 "max_namespaces": 10, 00:22:09.510 "min_cntlid": 1, 00:22:09.510 "max_cntlid": 65519, 00:22:09.510 "ana_reporting": false 00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "nvmf_subsystem_add_host", 00:22:09.510 "params": { 00:22:09.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.510 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.510 "psk": "key0" 00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "nvmf_subsystem_add_ns", 00:22:09.510 "params": { 00:22:09.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.510 "namespace": { 00:22:09.510 "nsid": 1, 00:22:09.510 "bdev_name": "malloc0", 00:22:09.510 "nguid": "A7FB69B58B53405AA8FB2653B187CC6F", 00:22:09.510 "uuid": "a7fb69b5-8b53-405a-a8fb-2653b187cc6f", 00:22:09.510 "no_auto_visible": 
false 00:22:09.510 } 00:22:09.510 } 00:22:09.510 }, 00:22:09.510 { 00:22:09.510 "method": "nvmf_subsystem_add_listener", 00:22:09.510 "params": { 00:22:09.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.510 "listen_address": { 00:22:09.510 "trtype": "TCP", 00:22:09.510 "adrfam": "IPv4", 00:22:09.510 "traddr": "10.0.0.2", 00:22:09.510 "trsvcid": "4420" 00:22:09.510 }, 00:22:09.510 "secure_channel": true 00:22:09.510 } 00:22:09.510 } 00:22:09.510 ] 00:22:09.510 } 00:22:09.510 ] 00:22:09.510 }' 00:22:09.510 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2681690 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2681690 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2681690 ']' 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.770 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.770 [2024-12-09 10:32:47.284096] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:09.770 [2024-12-09 10:32:47.284150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.770 [2024-12-09 10:32:47.361032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.770 [2024-12-09 10:32:47.401445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.770 [2024-12-09 10:32:47.401480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.770 [2024-12-09 10:32:47.401488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.770 [2024-12-09 10:32:47.401495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.770 [2024-12-09 10:32:47.401501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.770 [2024-12-09 10:32:47.402096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.030 [2024-12-09 10:32:47.616125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.030 [2024-12-09 10:32:47.648147] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.030 [2024-12-09 10:32:47.648342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2681739 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2681739 /var/tmp/bdevperf.sock 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2681739 ']' 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.598 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:10.598 "subsystems": [ 00:22:10.598 { 00:22:10.598 "subsystem": "keyring", 00:22:10.598 "config": [ 00:22:10.598 { 00:22:10.598 "method": "keyring_file_add_key", 00:22:10.598 "params": { 00:22:10.598 "name": "key0", 00:22:10.598 "path": "/tmp/tmp.VU79ajSvIn" 00:22:10.598 } 00:22:10.598 } 00:22:10.598 ] 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "subsystem": "iobuf", 00:22:10.598 "config": [ 00:22:10.598 { 00:22:10.598 "method": "iobuf_set_options", 00:22:10.598 "params": { 00:22:10.598 "small_pool_count": 8192, 00:22:10.598 "large_pool_count": 1024, 00:22:10.598 "small_bufsize": 8192, 00:22:10.598 "large_bufsize": 135168, 00:22:10.598 "enable_numa": false 00:22:10.598 } 00:22:10.598 } 00:22:10.598 ] 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "subsystem": "sock", 00:22:10.598 "config": [ 00:22:10.598 { 00:22:10.598 "method": "sock_set_default_impl", 00:22:10.598 "params": { 00:22:10.598 "impl_name": "posix" 00:22:10.598 } 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "method": "sock_impl_set_options", 00:22:10.598 "params": { 00:22:10.598 "impl_name": "ssl", 00:22:10.598 "recv_buf_size": 4096, 00:22:10.598 "send_buf_size": 4096, 00:22:10.598 "enable_recv_pipe": true, 00:22:10.598 "enable_quickack": false, 00:22:10.598 "enable_placement_id": 0, 00:22:10.598 "enable_zerocopy_send_server": true, 00:22:10.598 "enable_zerocopy_send_client": false, 00:22:10.598 "zerocopy_threshold": 0, 00:22:10.598 "tls_version": 0, 00:22:10.598 "enable_ktls": false 00:22:10.598 } 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "method": "sock_impl_set_options", 00:22:10.598 "params": { 
00:22:10.598 "impl_name": "posix", 00:22:10.598 "recv_buf_size": 2097152, 00:22:10.598 "send_buf_size": 2097152, 00:22:10.598 "enable_recv_pipe": true, 00:22:10.598 "enable_quickack": false, 00:22:10.598 "enable_placement_id": 0, 00:22:10.598 "enable_zerocopy_send_server": true, 00:22:10.598 "enable_zerocopy_send_client": false, 00:22:10.598 "zerocopy_threshold": 0, 00:22:10.598 "tls_version": 0, 00:22:10.598 "enable_ktls": false 00:22:10.598 } 00:22:10.598 } 00:22:10.598 ] 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "subsystem": "vmd", 00:22:10.598 "config": [] 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "subsystem": "accel", 00:22:10.598 "config": [ 00:22:10.598 { 00:22:10.598 "method": "accel_set_options", 00:22:10.598 "params": { 00:22:10.598 "small_cache_size": 128, 00:22:10.598 "large_cache_size": 16, 00:22:10.598 "task_count": 2048, 00:22:10.598 "sequence_count": 2048, 00:22:10.598 "buf_count": 2048 00:22:10.598 } 00:22:10.598 } 00:22:10.598 ] 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "subsystem": "bdev", 00:22:10.598 "config": [ 00:22:10.598 { 00:22:10.598 "method": "bdev_set_options", 00:22:10.598 "params": { 00:22:10.598 "bdev_io_pool_size": 65535, 00:22:10.598 "bdev_io_cache_size": 256, 00:22:10.598 "bdev_auto_examine": true, 00:22:10.598 "iobuf_small_cache_size": 128, 00:22:10.598 "iobuf_large_cache_size": 16 00:22:10.598 } 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "method": "bdev_raid_set_options", 00:22:10.598 "params": { 00:22:10.598 "process_window_size_kb": 1024, 00:22:10.598 "process_max_bandwidth_mb_sec": 0 00:22:10.598 } 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "method": "bdev_iscsi_set_options", 00:22:10.598 "params": { 00:22:10.598 "timeout_sec": 30 00:22:10.598 } 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "method": "bdev_nvme_set_options", 00:22:10.598 "params": { 00:22:10.598 "action_on_timeout": "none", 00:22:10.598 "timeout_us": 0, 00:22:10.598 "timeout_admin_us": 0, 00:22:10.598 "keep_alive_timeout_ms": 10000, 00:22:10.598 
"arbitration_burst": 0, 00:22:10.598 "low_priority_weight": 0, 00:22:10.598 "medium_priority_weight": 0, 00:22:10.598 "high_priority_weight": 0, 00:22:10.598 "nvme_adminq_poll_period_us": 10000, 00:22:10.598 "nvme_ioq_poll_period_us": 0, 00:22:10.598 "io_queue_requests": 512, 00:22:10.598 "delay_cmd_submit": true, 00:22:10.598 "transport_retry_count": 4, 00:22:10.598 "bdev_retry_count": 3, 00:22:10.598 "transport_ack_timeout": 0, 00:22:10.598 "ctrlr_loss_timeout_sec": 0, 00:22:10.598 "reconnect_delay_sec": 0, 00:22:10.598 "fast_io_fail_timeout_sec": 0, 00:22:10.598 "disable_auto_failback": false, 00:22:10.598 "generate_uuids": false, 00:22:10.598 "transport_tos": 0, 00:22:10.598 "nvme_error_stat": false, 00:22:10.598 "rdma_srq_size": 0, 00:22:10.598 "io_path_stat": false, 00:22:10.598 "allow_accel_sequence": false, 00:22:10.598 "rdma_max_cq_size": 0, 00:22:10.598 "rdma_cm_event_timeout_ms": 0, 00:22:10.598 "dhchap_digests": [ 00:22:10.598 "sha256", 00:22:10.598 "sha384", 00:22:10.598 "sha512" 00:22:10.598 ], 00:22:10.598 "dhchap_dhgroups": [ 00:22:10.598 "null", 00:22:10.598 "ffdhe2048", 00:22:10.598 "ffdhe3072", 00:22:10.598 "ffdhe4096", 00:22:10.598 "ffdhe6144", 00:22:10.598 "ffdhe8192" 00:22:10.598 ] 00:22:10.598 } 00:22:10.598 }, 00:22:10.598 { 00:22:10.598 "method": "bdev_nvme_attach_controller", 00:22:10.598 "params": { 00:22:10.598 "name": "TLSTEST", 00:22:10.598 "trtype": "TCP", 00:22:10.598 "adrfam": "IPv4", 00:22:10.598 "traddr": "10.0.0.2", 00:22:10.598 "trsvcid": "4420", 00:22:10.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.598 "prchk_reftag": false, 00:22:10.598 "prchk_guard": false, 00:22:10.598 "ctrlr_loss_timeout_sec": 0, 00:22:10.598 "reconnect_delay_sec": 0, 00:22:10.598 "fast_io_fail_timeout_sec": 0, 00:22:10.598 "psk": "key0", 00:22:10.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.599 "hdgst": false, 00:22:10.599 "ddgst": false, 00:22:10.599 "multipath": "multipath" 00:22:10.599 } 00:22:10.599 }, 00:22:10.599 { 00:22:10.599 
"method": "bdev_nvme_set_hotplug", 00:22:10.599 "params": { 00:22:10.599 "period_us": 100000, 00:22:10.599 "enable": false 00:22:10.599 } 00:22:10.599 }, 00:22:10.599 { 00:22:10.599 "method": "bdev_wait_for_examine" 00:22:10.599 } 00:22:10.599 ] 00:22:10.599 }, 00:22:10.599 { 00:22:10.599 "subsystem": "nbd", 00:22:10.599 "config": [] 00:22:10.599 } 00:22:10.599 ] 00:22:10.599 }' 00:22:10.599 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.599 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.599 [2024-12-09 10:32:48.192277] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:10.599 [2024-12-09 10:32:48.192324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681739 ] 00:22:10.599 [2024-12-09 10:32:48.268834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.599 [2024-12-09 10:32:48.310915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.857 [2024-12-09 10:32:48.463512] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.424 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.424 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:11.424 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:11.424 Running I/O for 10 seconds... 
00:22:13.412 5376.00 IOPS, 21.00 MiB/s [2024-12-09T09:32:52.525Z] 5440.50 IOPS, 21.25 MiB/s [2024-12-09T09:32:53.458Z] 5505.00 IOPS, 21.50 MiB/s [2024-12-09T09:32:54.389Z] 5524.00 IOPS, 21.58 MiB/s [2024-12-09T09:32:55.322Z] 5552.00 IOPS, 21.69 MiB/s [2024-12-09T09:32:56.254Z] 5523.67 IOPS, 21.58 MiB/s [2024-12-09T09:32:57.190Z] 5543.00 IOPS, 21.65 MiB/s [2024-12-09T09:32:58.567Z] 5551.00 IOPS, 21.68 MiB/s [2024-12-09T09:32:59.503Z] 5506.89 IOPS, 21.51 MiB/s [2024-12-09T09:32:59.503Z] 5520.40 IOPS, 21.56 MiB/s 00:22:21.779 Latency(us) 00:22:21.779 [2024-12-09T09:32:59.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.779 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.779 Verification LBA range: start 0x0 length 0x2000 00:22:21.779 TLSTESTn1 : 10.02 5524.10 21.58 0.00 0.00 23135.09 4899.60 37698.80 00:22:21.779 [2024-12-09T09:32:59.503Z] =================================================================================================================== 00:22:21.779 [2024-12-09T09:32:59.503Z] Total : 5524.10 21.58 0.00 0.00 23135.09 4899.60 37698.80 00:22:21.779 { 00:22:21.779 "results": [ 00:22:21.779 { 00:22:21.779 "job": "TLSTESTn1", 00:22:21.779 "core_mask": "0x4", 00:22:21.779 "workload": "verify", 00:22:21.779 "status": "finished", 00:22:21.779 "verify_range": { 00:22:21.779 "start": 0, 00:22:21.779 "length": 8192 00:22:21.779 }, 00:22:21.779 "queue_depth": 128, 00:22:21.779 "io_size": 4096, 00:22:21.779 "runtime": 10.016288, 00:22:21.779 "iops": 5524.102342105179, 00:22:21.779 "mibps": 21.578524773848354, 00:22:21.779 "io_failed": 0, 00:22:21.779 "io_timeout": 0, 00:22:21.779 "avg_latency_us": 23135.089389208326, 00:22:21.779 "min_latency_us": 4899.596190476191, 00:22:21.779 "max_latency_us": 37698.80380952381 00:22:21.779 } 00:22:21.779 ], 00:22:21.779 "core_count": 1 00:22:21.779 } 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2681739 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2681739 ']' 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2681739 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2681739 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2681739' 00:22:21.779 killing process with pid 2681739 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2681739 00:22:21.779 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.779 00:22:21.779 Latency(us) 00:22:21.779 [2024-12-09T09:32:59.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.779 [2024-12-09T09:32:59.503Z] =================================================================================================================== 00:22:21.779 [2024-12-09T09:32:59.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2681739 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2681690 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2681690 ']' 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2681690 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2681690 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2681690' 00:22:21.779 killing process with pid 2681690 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2681690 00:22:21.779 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2681690 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2683584 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2683584 00:22:22.038 
10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2683584 ']' 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.038 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.038 [2024-12-09 10:32:59.670214] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:22.038 [2024-12-09 10:32:59.670264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.038 [2024-12-09 10:32:59.749050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.297 [2024-12-09 10:32:59.785690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.297 [2024-12-09 10:32:59.785722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.297 [2024-12-09 10:32:59.785729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.297 [2024-12-09 10:32:59.785734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:22.297 [2024-12-09 10:32:59.785738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.297 [2024-12-09 10:32:59.786308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.VU79ajSvIn 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VU79ajSvIn 00:22:22.297 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.555 [2024-12-09 10:33:00.111667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.555 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:22.813 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:22.813 [2024-12-09 10:33:00.520715] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:22:22.813 [2024-12-09 10:33:00.520915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.072 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.072 malloc0 00:22:23.072 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.332 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:22:23.590 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2684001 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2684001 /var/tmp/bdevperf.sock 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2684001 ']' 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.850 
10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.850 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.850 [2024-12-09 10:33:01.368854] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:23.850 [2024-12-09 10:33:01.368908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684001 ] 00:22:23.850 [2024-12-09 10:33:01.446353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.850 [2024-12-09 10:33:01.486590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.108 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.108 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:24.108 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:22:24.108 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:24.367 [2024-12-09 10:33:01.959169] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:22:24.367 nvme0n1 00:22:24.367 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.626 Running I/O for 1 seconds... 00:22:25.584 5358.00 IOPS, 20.93 MiB/s 00:22:25.584 Latency(us) 00:22:25.584 [2024-12-09T09:33:03.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.584 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:25.584 Verification LBA range: start 0x0 length 0x2000 00:22:25.584 nvme0n1 : 1.01 5411.46 21.14 0.00 0.00 23495.02 5554.96 30333.81 00:22:25.584 [2024-12-09T09:33:03.308Z] =================================================================================================================== 00:22:25.584 [2024-12-09T09:33:03.308Z] Total : 5411.46 21.14 0.00 0.00 23495.02 5554.96 30333.81 00:22:25.584 { 00:22:25.584 "results": [ 00:22:25.584 { 00:22:25.584 "job": "nvme0n1", 00:22:25.584 "core_mask": "0x2", 00:22:25.584 "workload": "verify", 00:22:25.584 "status": "finished", 00:22:25.584 "verify_range": { 00:22:25.584 "start": 0, 00:22:25.584 "length": 8192 00:22:25.584 }, 00:22:25.584 "queue_depth": 128, 00:22:25.584 "io_size": 4096, 00:22:25.584 "runtime": 1.013774, 00:22:25.584 "iops": 5411.462515314064, 00:22:25.584 "mibps": 21.138525450445563, 00:22:25.584 "io_failed": 0, 00:22:25.584 "io_timeout": 0, 00:22:25.584 "avg_latency_us": 23495.019501067654, 00:22:25.584 "min_latency_us": 5554.95619047619, 00:22:25.584 "max_latency_us": 30333.805714285714 00:22:25.584 } 00:22:25.584 ], 00:22:25.584 "core_count": 1 00:22:25.584 } 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2684001 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2684001 ']' 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2684001 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2684001 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2684001' 00:22:25.584 killing process with pid 2684001 00:22:25.584 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2684001 00:22:25.584 Received shutdown signal, test time was about 1.000000 seconds 00:22:25.584 00:22:25.584 Latency(us) 00:22:25.584 [2024-12-09T09:33:03.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.584 [2024-12-09T09:33:03.308Z] =================================================================================================================== 00:22:25.584 [2024-12-09T09:33:03.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.585 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2684001 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2683584 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2683584 ']' 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2683584 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2683584 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2683584' 00:22:25.843 killing process with pid 2683584 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2683584 00:22:25.843 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2683584 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2684434 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2684434 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2684434 ']' 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.101 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.101 [2024-12-09 10:33:03.670668] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:26.101 [2024-12-09 10:33:03.670718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.101 [2024-12-09 10:33:03.749936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.101 [2024-12-09 10:33:03.785997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.101 [2024-12-09 10:33:03.786031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.101 [2024-12-09 10:33:03.786039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.101 [2024-12-09 10:33:03.786046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.101 [2024-12-09 10:33:03.786050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
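An aside for readers parsing the bdevperf result blocks in this log: the `mibps` field is derived from `iops` and the 4096-byte `io_size`, and `iops` times `runtime` recovers the integer count of completed I/Os. A quick sanity check against the 10-second TLSTESTn1 run above, with the figures copied from its JSON results (this check is not part of the test itself):

```python
# Cross-check the derived fields in the 10-second bdevperf results above.
iops = 5524.102342105179   # "iops" from the TLSTESTn1 JSON results block
io_size = 4096             # "io_size" in bytes (4 KiB I/Os)
runtime = 10.016288        # "runtime" in seconds

# mibps = iops * io_size / 2^20
mibps = iops * io_size / (1 << 20)
print(round(mibps, 2))     # -> 21.58, matching the reported "mibps"

# iops * runtime recovers the number of completed I/Os (an integer)
total_ios = iops * runtime
print(round(total_ios))    # -> 55331
```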
00:22:26.101 [2024-12-09 10:33:03.786637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.359 [2024-12-09 10:33:03.934923] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.359 malloc0 00:22:26.359 [2024-12-09 10:33:03.963166] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.359 [2024-12-09 10:33:03.963377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2684464 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2684464 /var/tmp/bdevperf.sock 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2684464 ']' 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.359 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.359 [2024-12-09 10:33:04.038444] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:22:26.359 [2024-12-09 10:33:04.038483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684464 ] 00:22:26.617 [2024-12-09 10:33:04.115648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.617 [2024-12-09 10:33:04.157221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.617 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.617 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:26.617 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VU79ajSvIn 00:22:26.875 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:26.875 [2024-12-09 10:33:04.593222] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.133 nvme0n1 00:22:27.133 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.133 Running I/O for 1 seconds... 
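The 1-second run whose results follow can also be cross-checked with Little's law: with the queue kept full at depth 128, average latency is roughly queue_depth / IOPS. The check below uses the `iops` and `avg_latency_us` values from the nvme0n1 JSON results in this log; it is only approximate, since bdevperf reports wall-clock runtime and the queue is not full during ramp-up:

```python
# Little's-law cross-check for the 1-second bdevperf run (nvme0n1, queue depth 128).
queue_depth = 128
iops = 5250.338294993234              # "iops" from the JSON results
reported_avg_us = 24201.204392198866  # "avg_latency_us" from the JSON results

# With a steadily full queue, avg latency ~= queue_depth / IOPS.
estimate_us = queue_depth / iops * 1e6
print(round(estimate_us, 1))          # ~24379 us, within about 1% of the report

error = abs(estimate_us - reported_avg_us) / reported_avg_us
print(f"{error:.2%}")
```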
00:22:28.068 5207.00 IOPS, 20.34 MiB/s 00:22:28.068 Latency(us) 00:22:28.068 [2024-12-09T09:33:05.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.069 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:28.069 Verification LBA range: start 0x0 length 0x2000 00:22:28.069 nvme0n1 : 1.02 5250.34 20.51 0.00 0.00 24201.20 6709.64 23842.62 00:22:28.069 [2024-12-09T09:33:05.793Z] =================================================================================================================== 00:22:28.069 [2024-12-09T09:33:05.793Z] Total : 5250.34 20.51 0.00 0.00 24201.20 6709.64 23842.62 00:22:28.069 { 00:22:28.069 "results": [ 00:22:28.069 { 00:22:28.069 "job": "nvme0n1", 00:22:28.069 "core_mask": "0x2", 00:22:28.069 "workload": "verify", 00:22:28.069 "status": "finished", 00:22:28.069 "verify_range": { 00:22:28.069 "start": 0, 00:22:28.069 "length": 8192 00:22:28.069 }, 00:22:28.069 "queue_depth": 128, 00:22:28.069 "io_size": 4096, 00:22:28.069 "runtime": 1.016125, 00:22:28.069 "iops": 5250.338294993234, 00:22:28.069 "mibps": 20.50913396481732, 00:22:28.069 "io_failed": 0, 00:22:28.069 "io_timeout": 0, 00:22:28.069 "avg_latency_us": 24201.204392198866, 00:22:28.069 "min_latency_us": 6709.638095238095, 00:22:28.069 "max_latency_us": 23842.620952380952 00:22:28.069 } 00:22:28.069 ], 00:22:28.069 "core_count": 1 00:22:28.069 } 00:22:28.328 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:28.328 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.328 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.328 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.328 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:28.328 "subsystems": [ 00:22:28.328 { 00:22:28.328 "subsystem": 
"keyring", 00:22:28.328 "config": [ 00:22:28.328 { 00:22:28.328 "method": "keyring_file_add_key", 00:22:28.328 "params": { 00:22:28.328 "name": "key0", 00:22:28.328 "path": "/tmp/tmp.VU79ajSvIn" 00:22:28.328 } 00:22:28.328 } 00:22:28.328 ] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "iobuf", 00:22:28.328 "config": [ 00:22:28.328 { 00:22:28.328 "method": "iobuf_set_options", 00:22:28.328 "params": { 00:22:28.328 "small_pool_count": 8192, 00:22:28.328 "large_pool_count": 1024, 00:22:28.328 "small_bufsize": 8192, 00:22:28.328 "large_bufsize": 135168, 00:22:28.328 "enable_numa": false 00:22:28.328 } 00:22:28.328 } 00:22:28.328 ] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "sock", 00:22:28.328 "config": [ 00:22:28.328 { 00:22:28.328 "method": "sock_set_default_impl", 00:22:28.328 "params": { 00:22:28.328 "impl_name": "posix" 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "sock_impl_set_options", 00:22:28.328 "params": { 00:22:28.328 "impl_name": "ssl", 00:22:28.328 "recv_buf_size": 4096, 00:22:28.328 "send_buf_size": 4096, 00:22:28.328 "enable_recv_pipe": true, 00:22:28.328 "enable_quickack": false, 00:22:28.328 "enable_placement_id": 0, 00:22:28.328 "enable_zerocopy_send_server": true, 00:22:28.328 "enable_zerocopy_send_client": false, 00:22:28.328 "zerocopy_threshold": 0, 00:22:28.328 "tls_version": 0, 00:22:28.328 "enable_ktls": false 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "sock_impl_set_options", 00:22:28.328 "params": { 00:22:28.328 "impl_name": "posix", 00:22:28.328 "recv_buf_size": 2097152, 00:22:28.328 "send_buf_size": 2097152, 00:22:28.328 "enable_recv_pipe": true, 00:22:28.328 "enable_quickack": false, 00:22:28.328 "enable_placement_id": 0, 00:22:28.328 "enable_zerocopy_send_server": true, 00:22:28.328 "enable_zerocopy_send_client": false, 00:22:28.328 "zerocopy_threshold": 0, 00:22:28.328 "tls_version": 0, 00:22:28.328 "enable_ktls": false 00:22:28.328 } 00:22:28.328 } 00:22:28.328 
] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "vmd", 00:22:28.328 "config": [] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "accel", 00:22:28.328 "config": [ 00:22:28.328 { 00:22:28.328 "method": "accel_set_options", 00:22:28.328 "params": { 00:22:28.328 "small_cache_size": 128, 00:22:28.328 "large_cache_size": 16, 00:22:28.328 "task_count": 2048, 00:22:28.328 "sequence_count": 2048, 00:22:28.328 "buf_count": 2048 00:22:28.328 } 00:22:28.328 } 00:22:28.328 ] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "bdev", 00:22:28.328 "config": [ 00:22:28.328 { 00:22:28.328 "method": "bdev_set_options", 00:22:28.328 "params": { 00:22:28.328 "bdev_io_pool_size": 65535, 00:22:28.328 "bdev_io_cache_size": 256, 00:22:28.328 "bdev_auto_examine": true, 00:22:28.328 "iobuf_small_cache_size": 128, 00:22:28.328 "iobuf_large_cache_size": 16 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "bdev_raid_set_options", 00:22:28.328 "params": { 00:22:28.328 "process_window_size_kb": 1024, 00:22:28.328 "process_max_bandwidth_mb_sec": 0 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "bdev_iscsi_set_options", 00:22:28.328 "params": { 00:22:28.328 "timeout_sec": 30 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "bdev_nvme_set_options", 00:22:28.328 "params": { 00:22:28.328 "action_on_timeout": "none", 00:22:28.328 "timeout_us": 0, 00:22:28.328 "timeout_admin_us": 0, 00:22:28.328 "keep_alive_timeout_ms": 10000, 00:22:28.328 "arbitration_burst": 0, 00:22:28.328 "low_priority_weight": 0, 00:22:28.328 "medium_priority_weight": 0, 00:22:28.328 "high_priority_weight": 0, 00:22:28.328 "nvme_adminq_poll_period_us": 10000, 00:22:28.328 "nvme_ioq_poll_period_us": 0, 00:22:28.328 "io_queue_requests": 0, 00:22:28.328 "delay_cmd_submit": true, 00:22:28.328 "transport_retry_count": 4, 00:22:28.328 "bdev_retry_count": 3, 00:22:28.328 "transport_ack_timeout": 0, 00:22:28.328 "ctrlr_loss_timeout_sec": 0, 
00:22:28.328 "reconnect_delay_sec": 0, 00:22:28.328 "fast_io_fail_timeout_sec": 0, 00:22:28.328 "disable_auto_failback": false, 00:22:28.328 "generate_uuids": false, 00:22:28.328 "transport_tos": 0, 00:22:28.328 "nvme_error_stat": false, 00:22:28.328 "rdma_srq_size": 0, 00:22:28.328 "io_path_stat": false, 00:22:28.328 "allow_accel_sequence": false, 00:22:28.328 "rdma_max_cq_size": 0, 00:22:28.328 "rdma_cm_event_timeout_ms": 0, 00:22:28.328 "dhchap_digests": [ 00:22:28.328 "sha256", 00:22:28.328 "sha384", 00:22:28.328 "sha512" 00:22:28.328 ], 00:22:28.328 "dhchap_dhgroups": [ 00:22:28.328 "null", 00:22:28.328 "ffdhe2048", 00:22:28.328 "ffdhe3072", 00:22:28.328 "ffdhe4096", 00:22:28.328 "ffdhe6144", 00:22:28.328 "ffdhe8192" 00:22:28.328 ] 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "bdev_nvme_set_hotplug", 00:22:28.328 "params": { 00:22:28.328 "period_us": 100000, 00:22:28.328 "enable": false 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "bdev_malloc_create", 00:22:28.328 "params": { 00:22:28.328 "name": "malloc0", 00:22:28.328 "num_blocks": 8192, 00:22:28.328 "block_size": 4096, 00:22:28.328 "physical_block_size": 4096, 00:22:28.328 "uuid": "c3431bda-2c80-49a8-bcdc-808b3be7943f", 00:22:28.328 "optimal_io_boundary": 0, 00:22:28.328 "md_size": 0, 00:22:28.328 "dif_type": 0, 00:22:28.328 "dif_is_head_of_md": false, 00:22:28.328 "dif_pi_format": 0 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "bdev_wait_for_examine" 00:22:28.328 } 00:22:28.328 ] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "nbd", 00:22:28.328 "config": [] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "scheduler", 00:22:28.328 "config": [ 00:22:28.328 { 00:22:28.328 "method": "framework_set_scheduler", 00:22:28.328 "params": { 00:22:28.328 "name": "static" 00:22:28.328 } 00:22:28.328 } 00:22:28.328 ] 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "subsystem": "nvmf", 00:22:28.328 "config": [ 00:22:28.328 { 
00:22:28.328 "method": "nvmf_set_config", 00:22:28.328 "params": { 00:22:28.328 "discovery_filter": "match_any", 00:22:28.328 "admin_cmd_passthru": { 00:22:28.328 "identify_ctrlr": false 00:22:28.328 }, 00:22:28.328 "dhchap_digests": [ 00:22:28.328 "sha256", 00:22:28.328 "sha384", 00:22:28.328 "sha512" 00:22:28.328 ], 00:22:28.328 "dhchap_dhgroups": [ 00:22:28.328 "null", 00:22:28.328 "ffdhe2048", 00:22:28.328 "ffdhe3072", 00:22:28.328 "ffdhe4096", 00:22:28.328 "ffdhe6144", 00:22:28.328 "ffdhe8192" 00:22:28.328 ] 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "nvmf_set_max_subsystems", 00:22:28.328 "params": { 00:22:28.328 "max_subsystems": 1024 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "nvmf_set_crdt", 00:22:28.328 "params": { 00:22:28.328 "crdt1": 0, 00:22:28.328 "crdt2": 0, 00:22:28.328 "crdt3": 0 00:22:28.328 } 00:22:28.328 }, 00:22:28.328 { 00:22:28.328 "method": "nvmf_create_transport", 00:22:28.328 "params": { 00:22:28.329 "trtype": "TCP", 00:22:28.329 "max_queue_depth": 128, 00:22:28.329 "max_io_qpairs_per_ctrlr": 127, 00:22:28.329 "in_capsule_data_size": 4096, 00:22:28.329 "max_io_size": 131072, 00:22:28.329 "io_unit_size": 131072, 00:22:28.329 "max_aq_depth": 128, 00:22:28.329 "num_shared_buffers": 511, 00:22:28.329 "buf_cache_size": 4294967295, 00:22:28.329 "dif_insert_or_strip": false, 00:22:28.329 "zcopy": false, 00:22:28.329 "c2h_success": false, 00:22:28.329 "sock_priority": 0, 00:22:28.329 "abort_timeout_sec": 1, 00:22:28.329 "ack_timeout": 0, 00:22:28.329 "data_wr_pool_size": 0 00:22:28.329 } 00:22:28.329 }, 00:22:28.329 { 00:22:28.329 "method": "nvmf_create_subsystem", 00:22:28.329 "params": { 00:22:28.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.329 "allow_any_host": false, 00:22:28.329 "serial_number": "00000000000000000000", 00:22:28.329 "model_number": "SPDK bdev Controller", 00:22:28.329 "max_namespaces": 32, 00:22:28.329 "min_cntlid": 1, 00:22:28.329 "max_cntlid": 65519, 00:22:28.329 
"ana_reporting": false 00:22:28.329 } 00:22:28.329 }, 00:22:28.329 { 00:22:28.329 "method": "nvmf_subsystem_add_host", 00:22:28.329 "params": { 00:22:28.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.329 "host": "nqn.2016-06.io.spdk:host1", 00:22:28.329 "psk": "key0" 00:22:28.329 } 00:22:28.329 }, 00:22:28.329 { 00:22:28.329 "method": "nvmf_subsystem_add_ns", 00:22:28.329 "params": { 00:22:28.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.329 "namespace": { 00:22:28.329 "nsid": 1, 00:22:28.329 "bdev_name": "malloc0", 00:22:28.329 "nguid": "C3431BDA2C8049A8BCDC808B3BE7943F", 00:22:28.329 "uuid": "c3431bda-2c80-49a8-bcdc-808b3be7943f", 00:22:28.329 "no_auto_visible": false 00:22:28.329 } 00:22:28.329 } 00:22:28.329 }, 00:22:28.329 { 00:22:28.329 "method": "nvmf_subsystem_add_listener", 00:22:28.329 "params": { 00:22:28.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.329 "listen_address": { 00:22:28.329 "trtype": "TCP", 00:22:28.329 "adrfam": "IPv4", 00:22:28.329 "traddr": "10.0.0.2", 00:22:28.329 "trsvcid": "4420" 00:22:28.329 }, 00:22:28.329 "secure_channel": false, 00:22:28.329 "sock_impl": "ssl" 00:22:28.329 } 00:22:28.329 } 00:22:28.329 ] 00:22:28.329 } 00:22:28.329 ] 00:22:28.329 }' 00:22:28.329 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:28.587 "subsystems": [ 00:22:28.587 { 00:22:28.587 "subsystem": "keyring", 00:22:28.587 "config": [ 00:22:28.587 { 00:22:28.587 "method": "keyring_file_add_key", 00:22:28.587 "params": { 00:22:28.587 "name": "key0", 00:22:28.587 "path": "/tmp/tmp.VU79ajSvIn" 00:22:28.587 } 00:22:28.587 } 00:22:28.587 ] 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "subsystem": "iobuf", 00:22:28.587 "config": [ 00:22:28.587 { 00:22:28.587 "method": "iobuf_set_options", 00:22:28.587 "params": { 00:22:28.587 
"small_pool_count": 8192, 00:22:28.587 "large_pool_count": 1024, 00:22:28.587 "small_bufsize": 8192, 00:22:28.587 "large_bufsize": 135168, 00:22:28.587 "enable_numa": false 00:22:28.587 } 00:22:28.587 } 00:22:28.587 ] 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "subsystem": "sock", 00:22:28.587 "config": [ 00:22:28.587 { 00:22:28.587 "method": "sock_set_default_impl", 00:22:28.587 "params": { 00:22:28.587 "impl_name": "posix" 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "sock_impl_set_options", 00:22:28.587 "params": { 00:22:28.587 "impl_name": "ssl", 00:22:28.587 "recv_buf_size": 4096, 00:22:28.587 "send_buf_size": 4096, 00:22:28.587 "enable_recv_pipe": true, 00:22:28.587 "enable_quickack": false, 00:22:28.587 "enable_placement_id": 0, 00:22:28.587 "enable_zerocopy_send_server": true, 00:22:28.587 "enable_zerocopy_send_client": false, 00:22:28.587 "zerocopy_threshold": 0, 00:22:28.587 "tls_version": 0, 00:22:28.587 "enable_ktls": false 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "sock_impl_set_options", 00:22:28.587 "params": { 00:22:28.587 "impl_name": "posix", 00:22:28.587 "recv_buf_size": 2097152, 00:22:28.587 "send_buf_size": 2097152, 00:22:28.587 "enable_recv_pipe": true, 00:22:28.587 "enable_quickack": false, 00:22:28.587 "enable_placement_id": 0, 00:22:28.587 "enable_zerocopy_send_server": true, 00:22:28.587 "enable_zerocopy_send_client": false, 00:22:28.587 "zerocopy_threshold": 0, 00:22:28.587 "tls_version": 0, 00:22:28.587 "enable_ktls": false 00:22:28.587 } 00:22:28.587 } 00:22:28.587 ] 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "subsystem": "vmd", 00:22:28.587 "config": [] 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "subsystem": "accel", 00:22:28.587 "config": [ 00:22:28.587 { 00:22:28.587 "method": "accel_set_options", 00:22:28.587 "params": { 00:22:28.587 "small_cache_size": 128, 00:22:28.587 "large_cache_size": 16, 00:22:28.587 "task_count": 2048, 00:22:28.587 "sequence_count": 2048, 00:22:28.587 
"buf_count": 2048 00:22:28.587 } 00:22:28.587 } 00:22:28.587 ] 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "subsystem": "bdev", 00:22:28.587 "config": [ 00:22:28.587 { 00:22:28.587 "method": "bdev_set_options", 00:22:28.587 "params": { 00:22:28.587 "bdev_io_pool_size": 65535, 00:22:28.587 "bdev_io_cache_size": 256, 00:22:28.587 "bdev_auto_examine": true, 00:22:28.587 "iobuf_small_cache_size": 128, 00:22:28.587 "iobuf_large_cache_size": 16 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "bdev_raid_set_options", 00:22:28.587 "params": { 00:22:28.587 "process_window_size_kb": 1024, 00:22:28.587 "process_max_bandwidth_mb_sec": 0 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "bdev_iscsi_set_options", 00:22:28.587 "params": { 00:22:28.587 "timeout_sec": 30 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "bdev_nvme_set_options", 00:22:28.587 "params": { 00:22:28.587 "action_on_timeout": "none", 00:22:28.587 "timeout_us": 0, 00:22:28.587 "timeout_admin_us": 0, 00:22:28.587 "keep_alive_timeout_ms": 10000, 00:22:28.587 "arbitration_burst": 0, 00:22:28.587 "low_priority_weight": 0, 00:22:28.587 "medium_priority_weight": 0, 00:22:28.587 "high_priority_weight": 0, 00:22:28.587 "nvme_adminq_poll_period_us": 10000, 00:22:28.587 "nvme_ioq_poll_period_us": 0, 00:22:28.587 "io_queue_requests": 512, 00:22:28.587 "delay_cmd_submit": true, 00:22:28.587 "transport_retry_count": 4, 00:22:28.587 "bdev_retry_count": 3, 00:22:28.587 "transport_ack_timeout": 0, 00:22:28.587 "ctrlr_loss_timeout_sec": 0, 00:22:28.587 "reconnect_delay_sec": 0, 00:22:28.587 "fast_io_fail_timeout_sec": 0, 00:22:28.587 "disable_auto_failback": false, 00:22:28.587 "generate_uuids": false, 00:22:28.587 "transport_tos": 0, 00:22:28.587 "nvme_error_stat": false, 00:22:28.587 "rdma_srq_size": 0, 00:22:28.587 "io_path_stat": false, 00:22:28.587 "allow_accel_sequence": false, 00:22:28.587 "rdma_max_cq_size": 0, 00:22:28.587 "rdma_cm_event_timeout_ms": 0, 
00:22:28.587 "dhchap_digests": [ 00:22:28.587 "sha256", 00:22:28.587 "sha384", 00:22:28.587 "sha512" 00:22:28.587 ], 00:22:28.587 "dhchap_dhgroups": [ 00:22:28.587 "null", 00:22:28.587 "ffdhe2048", 00:22:28.587 "ffdhe3072", 00:22:28.587 "ffdhe4096", 00:22:28.587 "ffdhe6144", 00:22:28.587 "ffdhe8192" 00:22:28.587 ] 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "bdev_nvme_attach_controller", 00:22:28.587 "params": { 00:22:28.587 "name": "nvme0", 00:22:28.587 "trtype": "TCP", 00:22:28.587 "adrfam": "IPv4", 00:22:28.587 "traddr": "10.0.0.2", 00:22:28.587 "trsvcid": "4420", 00:22:28.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.587 "prchk_reftag": false, 00:22:28.587 "prchk_guard": false, 00:22:28.587 "ctrlr_loss_timeout_sec": 0, 00:22:28.587 "reconnect_delay_sec": 0, 00:22:28.587 "fast_io_fail_timeout_sec": 0, 00:22:28.587 "psk": "key0", 00:22:28.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.587 "hdgst": false, 00:22:28.587 "ddgst": false, 00:22:28.587 "multipath": "multipath" 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "bdev_nvme_set_hotplug", 00:22:28.587 "params": { 00:22:28.587 "period_us": 100000, 00:22:28.587 "enable": false 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "bdev_enable_histogram", 00:22:28.587 "params": { 00:22:28.587 "name": "nvme0n1", 00:22:28.587 "enable": true 00:22:28.587 } 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "method": "bdev_wait_for_examine" 00:22:28.587 } 00:22:28.587 ] 00:22:28.587 }, 00:22:28.587 { 00:22:28.587 "subsystem": "nbd", 00:22:28.587 "config": [] 00:22:28.587 } 00:22:28.587 ] 00:22:28.587 }' 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2684464 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2684464 ']' 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2684464 00:22:28.587 10:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2684464 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2684464' 00:22:28.587 killing process with pid 2684464 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2684464 00:22:28.587 Received shutdown signal, test time was about 1.000000 seconds 00:22:28.587 00:22:28.587 Latency(us) 00:22:28.587 [2024-12-09T09:33:06.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.587 [2024-12-09T09:33:06.311Z] =================================================================================================================== 00:22:28.587 [2024-12-09T09:33:06.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.587 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2684464 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2684434 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2684434 ']' 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2684434 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.844 
10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2684434 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2684434' 00:22:28.844 killing process with pid 2684434 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2684434 00:22:28.844 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2684434 00:22:29.102 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:29.102 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.102 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.102 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:29.102 "subsystems": [ 00:22:29.102 { 00:22:29.102 "subsystem": "keyring", 00:22:29.102 "config": [ 00:22:29.102 { 00:22:29.102 "method": "keyring_file_add_key", 00:22:29.102 "params": { 00:22:29.102 "name": "key0", 00:22:29.102 "path": "/tmp/tmp.VU79ajSvIn" 00:22:29.102 } 00:22:29.102 } 00:22:29.102 ] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "iobuf", 00:22:29.102 "config": [ 00:22:29.102 { 00:22:29.102 "method": "iobuf_set_options", 00:22:29.102 "params": { 00:22:29.102 "small_pool_count": 8192, 00:22:29.102 "large_pool_count": 1024, 00:22:29.102 "small_bufsize": 8192, 00:22:29.102 "large_bufsize": 135168, 00:22:29.102 "enable_numa": false 00:22:29.102 } 00:22:29.102 } 00:22:29.102 ] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "sock", 00:22:29.102 "config": [ 
00:22:29.102 { 00:22:29.102 "method": "sock_set_default_impl", 00:22:29.102 "params": { 00:22:29.102 "impl_name": "posix" 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "sock_impl_set_options", 00:22:29.102 "params": { 00:22:29.102 "impl_name": "ssl", 00:22:29.102 "recv_buf_size": 4096, 00:22:29.102 "send_buf_size": 4096, 00:22:29.102 "enable_recv_pipe": true, 00:22:29.102 "enable_quickack": false, 00:22:29.102 "enable_placement_id": 0, 00:22:29.102 "enable_zerocopy_send_server": true, 00:22:29.102 "enable_zerocopy_send_client": false, 00:22:29.102 "zerocopy_threshold": 0, 00:22:29.102 "tls_version": 0, 00:22:29.102 "enable_ktls": false 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "sock_impl_set_options", 00:22:29.102 "params": { 00:22:29.102 "impl_name": "posix", 00:22:29.102 "recv_buf_size": 2097152, 00:22:29.102 "send_buf_size": 2097152, 00:22:29.102 "enable_recv_pipe": true, 00:22:29.102 "enable_quickack": false, 00:22:29.102 "enable_placement_id": 0, 00:22:29.102 "enable_zerocopy_send_server": true, 00:22:29.102 "enable_zerocopy_send_client": false, 00:22:29.102 "zerocopy_threshold": 0, 00:22:29.102 "tls_version": 0, 00:22:29.102 "enable_ktls": false 00:22:29.102 } 00:22:29.102 } 00:22:29.102 ] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "vmd", 00:22:29.102 "config": [] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "accel", 00:22:29.102 "config": [ 00:22:29.102 { 00:22:29.102 "method": "accel_set_options", 00:22:29.102 "params": { 00:22:29.102 "small_cache_size": 128, 00:22:29.102 "large_cache_size": 16, 00:22:29.102 "task_count": 2048, 00:22:29.102 "sequence_count": 2048, 00:22:29.102 "buf_count": 2048 00:22:29.102 } 00:22:29.102 } 00:22:29.102 ] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "bdev", 00:22:29.102 "config": [ 00:22:29.102 { 00:22:29.102 "method": "bdev_set_options", 00:22:29.102 "params": { 00:22:29.102 "bdev_io_pool_size": 65535, 00:22:29.102 "bdev_io_cache_size": 
256, 00:22:29.102 "bdev_auto_examine": true, 00:22:29.102 "iobuf_small_cache_size": 128, 00:22:29.102 "iobuf_large_cache_size": 16 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "bdev_raid_set_options", 00:22:29.102 "params": { 00:22:29.102 "process_window_size_kb": 1024, 00:22:29.102 "process_max_bandwidth_mb_sec": 0 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "bdev_iscsi_set_options", 00:22:29.102 "params": { 00:22:29.102 "timeout_sec": 30 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "bdev_nvme_set_options", 00:22:29.102 "params": { 00:22:29.102 "action_on_timeout": "none", 00:22:29.102 "timeout_us": 0, 00:22:29.102 "timeout_admin_us": 0, 00:22:29.102 "keep_alive_timeout_ms": 10000, 00:22:29.102 "arbitration_burst": 0, 00:22:29.102 "low_priority_weight": 0, 00:22:29.102 "medium_priority_weight": 0, 00:22:29.102 "high_priority_weight": 0, 00:22:29.102 "nvme_adminq_poll_period_us": 10000, 00:22:29.102 "nvme_ioq_poll_period_us": 0, 00:22:29.102 "io_queue_requests": 0, 00:22:29.102 "delay_cmd_submit": true, 00:22:29.102 "transport_retry_count": 4, 00:22:29.102 "bdev_retry_count": 3, 00:22:29.102 "transport_ack_timeout": 0, 00:22:29.102 "ctrlr_loss_timeout_sec": 0, 00:22:29.102 "reconnect_delay_sec": 0, 00:22:29.102 "fast_io_fail_timeout_sec": 0, 00:22:29.102 "disable_auto_failback": false, 00:22:29.102 "generate_uuids": false, 00:22:29.102 "transport_tos": 0, 00:22:29.102 "nvme_error_stat": false, 00:22:29.102 "rdma_srq_size": 0, 00:22:29.102 "io_path_stat": false, 00:22:29.102 "allow_accel_sequence": false, 00:22:29.102 "rdma_max_cq_size": 0, 00:22:29.102 "rdma_cm_event_timeout_ms": 0, 00:22:29.102 "dhchap_digests": [ 00:22:29.102 "sha256", 00:22:29.102 "sha384", 00:22:29.102 "sha512" 00:22:29.102 ], 00:22:29.102 "dhchap_dhgroups": [ 00:22:29.102 "null", 00:22:29.102 "ffdhe2048", 00:22:29.102 "ffdhe3072", 00:22:29.102 "ffdhe4096", 00:22:29.102 "ffdhe6144", 00:22:29.102 "ffdhe8192" 00:22:29.102 ] 
00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "bdev_nvme_set_hotplug", 00:22:29.102 "params": { 00:22:29.102 "period_us": 100000, 00:22:29.102 "enable": false 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "bdev_malloc_create", 00:22:29.102 "params": { 00:22:29.102 "name": "malloc0", 00:22:29.102 "num_blocks": 8192, 00:22:29.102 "block_size": 4096, 00:22:29.102 "physical_block_size": 4096, 00:22:29.102 "uuid": "c3431bda-2c80-49a8-bcdc-808b3be7943f", 00:22:29.102 "optimal_io_boundary": 0, 00:22:29.102 "md_size": 0, 00:22:29.102 "dif_type": 0, 00:22:29.102 "dif_is_head_of_md": false, 00:22:29.102 "dif_pi_format": 0 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "bdev_wait_for_examine" 00:22:29.102 } 00:22:29.102 ] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "nbd", 00:22:29.102 "config": [] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "scheduler", 00:22:29.102 "config": [ 00:22:29.102 { 00:22:29.102 "method": "framework_set_scheduler", 00:22:29.102 "params": { 00:22:29.102 "name": "static" 00:22:29.102 } 00:22:29.102 } 00:22:29.102 ] 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "subsystem": "nvmf", 00:22:29.102 "config": [ 00:22:29.102 { 00:22:29.102 "method": "nvmf_set_config", 00:22:29.102 "params": { 00:22:29.102 "discovery_filter": "match_any", 00:22:29.102 "admin_cmd_passthru": { 00:22:29.102 "identify_ctrlr": false 00:22:29.102 }, 00:22:29.102 "dhchap_digests": [ 00:22:29.102 "sha256", 00:22:29.102 "sha384", 00:22:29.102 "sha512" 00:22:29.102 ], 00:22:29.102 "dhchap_dhgroups": [ 00:22:29.102 "null", 00:22:29.102 "ffdhe2048", 00:22:29.102 "ffdhe3072", 00:22:29.102 "ffdhe4096", 00:22:29.102 "ffdhe6144", 00:22:29.102 "ffdhe8192" 00:22:29.102 ] 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "nvmf_set_max_subsystems", 00:22:29.102 "params": { 00:22:29.102 "max_subsystems": 1024 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": 
"nvmf_set_crdt", 00:22:29.102 "params": { 00:22:29.102 "crdt1": 0, 00:22:29.102 "crdt2": 0, 00:22:29.102 "crdt3": 0 00:22:29.102 } 00:22:29.102 }, 00:22:29.102 { 00:22:29.102 "method": "nvmf_create_transport", 00:22:29.102 "params": { 00:22:29.102 "trtype": "TCP", 00:22:29.102 "max_queue_depth": 128, 00:22:29.102 "max_io_qpairs_per_ctrlr": 127, 00:22:29.102 "in_capsule_data_size": 4096, 00:22:29.103 "max_io_size": 131072, 00:22:29.103 "io_unit_size": 131072, 00:22:29.103 "max_aq_depth": 128, 00:22:29.103 "num_shared_buffers": 511, 00:22:29.103 "buf_cache_size": 4294967295, 00:22:29.103 "dif_insert_or_strip": false, 00:22:29.103 "zcopy": false, 00:22:29.103 "c2h_success": false, 00:22:29.103 "sock_priority": 0, 00:22:29.103 "abort_timeout_sec": 1, 00:22:29.103 "ack_timeout": 0, 00:22:29.103 "data_wr_pool_size": 0 00:22:29.103 } 00:22:29.103 }, 00:22:29.103 { 00:22:29.103 "method": "nvmf_create_subsystem", 00:22:29.103 "params": { 00:22:29.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.103 "allow_any_host": false, 00:22:29.103 "serial_number": "00000000000000000000", 00:22:29.103 "model_number": "SPDK bdev Controller", 00:22:29.103 "max_namespaces": 32, 00:22:29.103 "min_cntlid": 1, 00:22:29.103 "max_cntlid": 65519, 00:22:29.103 "ana_reporting": false 00:22:29.103 } 00:22:29.103 }, 00:22:29.103 { 00:22:29.103 "method": "nvmf_subsystem_add_host", 00:22:29.103 "params": { 00:22:29.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.103 "host": "nqn.2016-06.io.spdk:host1", 00:22:29.103 "psk": "key0" 00:22:29.103 } 00:22:29.103 }, 00:22:29.103 { 00:22:29.103 "method": "nvmf_subsystem_add_ns", 00:22:29.103 "params": { 00:22:29.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.103 "namespace": { 00:22:29.103 "nsid": 1, 00:22:29.103 "bdev_name": "malloc0", 00:22:29.103 "nguid": "C3431BDA2C8049A8BCDC808B3BE7943F", 00:22:29.103 "uuid": "c3431bda-2c80-49a8-bcdc-808b3be7943f", 00:22:29.103 "no_auto_visible": false 00:22:29.103 } 00:22:29.103 } 00:22:29.103 }, 00:22:29.103 { 
00:22:29.103 "method": "nvmf_subsystem_add_listener", 00:22:29.103 "params": { 00:22:29.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.103 "listen_address": { 00:22:29.103 "trtype": "TCP", 00:22:29.103 "adrfam": "IPv4", 00:22:29.103 "traddr": "10.0.0.2", 00:22:29.103 "trsvcid": "4420" 00:22:29.103 }, 00:22:29.103 "secure_channel": false, 00:22:29.103 "sock_impl": "ssl" 00:22:29.103 } 00:22:29.103 } 00:22:29.103 ] 00:22:29.103 } 00:22:29.103 ] 00:22:29.103 }' 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2684931 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2684931 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2684931 ']' 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.103 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.103 [2024-12-09 10:33:06.656390] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:22:29.103 [2024-12-09 10:33:06.656433] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.103 [2024-12-09 10:33:06.734665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.103 [2024-12-09 10:33:06.775142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.103 [2024-12-09 10:33:06.775176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.103 [2024-12-09 10:33:06.775186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.103 [2024-12-09 10:33:06.775191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.103 [2024-12-09 10:33:06.775196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:29.103 [2024-12-09 10:33:06.775790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.361 [2024-12-09 10:33:06.989804] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.361 [2024-12-09 10:33:07.021831] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.361 [2024-12-09 10:33:07.022043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2685174 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2685174 /var/tmp/bdevperf.sock 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2685174 ']' 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.928 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:29.928 "subsystems": [ 00:22:29.928 { 00:22:29.928 "subsystem": "keyring", 00:22:29.928 "config": [ 00:22:29.928 { 00:22:29.928 "method": "keyring_file_add_key", 00:22:29.928 "params": { 00:22:29.928 "name": "key0", 00:22:29.928 "path": "/tmp/tmp.VU79ajSvIn" 00:22:29.928 } 00:22:29.928 } 00:22:29.928 ] 00:22:29.928 }, 00:22:29.928 { 00:22:29.928 "subsystem": "iobuf", 00:22:29.928 "config": [ 00:22:29.928 { 00:22:29.928 "method": "iobuf_set_options", 00:22:29.928 "params": { 00:22:29.928 "small_pool_count": 8192, 00:22:29.928 "large_pool_count": 1024, 00:22:29.928 "small_bufsize": 8192, 00:22:29.928 "large_bufsize": 135168, 00:22:29.928 "enable_numa": false 00:22:29.928 } 00:22:29.928 } 00:22:29.928 ] 00:22:29.928 }, 00:22:29.928 { 00:22:29.928 "subsystem": "sock", 00:22:29.928 "config": [ 00:22:29.928 { 00:22:29.928 "method": "sock_set_default_impl", 00:22:29.928 "params": { 00:22:29.928 "impl_name": "posix" 00:22:29.928 } 00:22:29.928 }, 00:22:29.928 { 00:22:29.928 "method": "sock_impl_set_options", 00:22:29.928 "params": { 00:22:29.928 "impl_name": "ssl", 00:22:29.928 "recv_buf_size": 4096, 00:22:29.928 "send_buf_size": 4096, 00:22:29.928 "enable_recv_pipe": true, 00:22:29.928 "enable_quickack": false, 00:22:29.928 "enable_placement_id": 0, 00:22:29.928 "enable_zerocopy_send_server": true, 00:22:29.928 "enable_zerocopy_send_client": false, 00:22:29.928 "zerocopy_threshold": 0, 00:22:29.928 "tls_version": 0, 00:22:29.928 "enable_ktls": false 00:22:29.928 } 00:22:29.928 }, 00:22:29.928 { 00:22:29.928 "method": "sock_impl_set_options", 00:22:29.928 "params": { 
00:22:29.928 "impl_name": "posix", 00:22:29.928 "recv_buf_size": 2097152, 00:22:29.928 "send_buf_size": 2097152, 00:22:29.928 "enable_recv_pipe": true, 00:22:29.928 "enable_quickack": false, 00:22:29.928 "enable_placement_id": 0, 00:22:29.928 "enable_zerocopy_send_server": true, 00:22:29.928 "enable_zerocopy_send_client": false, 00:22:29.928 "zerocopy_threshold": 0, 00:22:29.928 "tls_version": 0, 00:22:29.928 "enable_ktls": false 00:22:29.928 } 00:22:29.928 } 00:22:29.928 ] 00:22:29.928 }, 00:22:29.928 { 00:22:29.928 "subsystem": "vmd", 00:22:29.928 "config": [] 00:22:29.928 }, 00:22:29.928 { 00:22:29.928 "subsystem": "accel", 00:22:29.928 "config": [ 00:22:29.928 { 00:22:29.928 "method": "accel_set_options", 00:22:29.928 "params": { 00:22:29.928 "small_cache_size": 128, 00:22:29.928 "large_cache_size": 16, 00:22:29.928 "task_count": 2048, 00:22:29.928 "sequence_count": 2048, 00:22:29.928 "buf_count": 2048 00:22:29.928 } 00:22:29.928 } 00:22:29.928 ] 00:22:29.928 }, 00:22:29.928 { 00:22:29.928 "subsystem": "bdev", 00:22:29.928 "config": [ 00:22:29.928 { 00:22:29.928 "method": "bdev_set_options", 00:22:29.928 "params": { 00:22:29.928 "bdev_io_pool_size": 65535, 00:22:29.928 "bdev_io_cache_size": 256, 00:22:29.928 "bdev_auto_examine": true, 00:22:29.928 "iobuf_small_cache_size": 128, 00:22:29.928 "iobuf_large_cache_size": 16 00:22:29.928 } 00:22:29.928 }, 00:22:29.929 { 00:22:29.929 "method": "bdev_raid_set_options", 00:22:29.929 "params": { 00:22:29.929 "process_window_size_kb": 1024, 00:22:29.929 "process_max_bandwidth_mb_sec": 0 00:22:29.929 } 00:22:29.929 }, 00:22:29.929 { 00:22:29.929 "method": "bdev_iscsi_set_options", 00:22:29.929 "params": { 00:22:29.929 "timeout_sec": 30 00:22:29.929 } 00:22:29.929 }, 00:22:29.929 { 00:22:29.929 "method": "bdev_nvme_set_options", 00:22:29.929 "params": { 00:22:29.929 "action_on_timeout": "none", 00:22:29.929 "timeout_us": 0, 00:22:29.929 "timeout_admin_us": 0, 00:22:29.929 "keep_alive_timeout_ms": 10000, 00:22:29.929 
"arbitration_burst": 0, 00:22:29.929 "low_priority_weight": 0, 00:22:29.929 "medium_priority_weight": 0, 00:22:29.929 "high_priority_weight": 0, 00:22:29.929 "nvme_adminq_poll_period_us": 10000, 00:22:29.929 "nvme_ioq_poll_period_us": 0, 00:22:29.929 "io_queue_requests": 512, 00:22:29.929 "delay_cmd_submit": true, 00:22:29.929 "transport_retry_count": 4, 00:22:29.929 "bdev_retry_count": 3, 00:22:29.929 "transport_ack_timeout": 0, 00:22:29.929 "ctrlr_loss_timeout_sec": 0, 00:22:29.929 "reconnect_delay_sec": 0, 00:22:29.929 "fast_io_fail_timeout_sec": 0, 00:22:29.929 "disable_auto_failback": false, 00:22:29.929 "generate_uuids": false, 00:22:29.929 "transport_tos": 0, 00:22:29.929 "nvme_error_stat": false, 00:22:29.929 "rdma_srq_size": 0, 00:22:29.929 "io_path_stat": false, 00:22:29.929 "allow_accel_sequence": false, 00:22:29.929 "rdma_max_cq_size": 0, 00:22:29.929 "rdma_cm_event_timeout_ms": 0, 00:22:29.929 "dhchap_digests": [ 00:22:29.929 "sha256", 00:22:29.929 "sha384", 00:22:29.929 "sha512" 00:22:29.929 ], 00:22:29.929 "dhchap_dhgroups": [ 00:22:29.929 "null", 00:22:29.929 "ffdhe2048", 00:22:29.929 "ffdhe3072", 00:22:29.929 "ffdhe4096", 00:22:29.929 "ffdhe6144", 00:22:29.929 "ffdhe8192" 00:22:29.929 ] 00:22:29.929 } 00:22:29.929 }, 00:22:29.929 { 00:22:29.929 "method": "bdev_nvme_attach_controller", 00:22:29.929 "params": { 00:22:29.929 "name": "nvme0", 00:22:29.929 "trtype": "TCP", 00:22:29.929 "adrfam": "IPv4", 00:22:29.929 "traddr": "10.0.0.2", 00:22:29.929 "trsvcid": "4420", 00:22:29.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.929 "prchk_reftag": false, 00:22:29.929 "prchk_guard": false, 00:22:29.929 "ctrlr_loss_timeout_sec": 0, 00:22:29.929 "reconnect_delay_sec": 0, 00:22:29.929 "fast_io_fail_timeout_sec": 0, 00:22:29.929 "psk": "key0", 00:22:29.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.929 "hdgst": false, 00:22:29.929 "ddgst": false, 00:22:29.929 "multipath": "multipath" 00:22:29.929 } 00:22:29.929 }, 00:22:29.929 { 00:22:29.929 
"method": "bdev_nvme_set_hotplug", 00:22:29.929 "params": { 00:22:29.929 "period_us": 100000, 00:22:29.929 "enable": false 00:22:29.929 } 00:22:29.929 }, 00:22:29.929 { 00:22:29.929 "method": "bdev_enable_histogram", 00:22:29.929 "params": { 00:22:29.929 "name": "nvme0n1", 00:22:29.929 "enable": true 00:22:29.929 } 00:22:29.929 }, 00:22:29.929 { 00:22:29.929 "method": "bdev_wait_for_examine" 00:22:29.929 } 00:22:29.929 ] 00:22:29.929 }, 00:22:29.929 { 00:22:29.929 "subsystem": "nbd", 00:22:29.929 "config": [] 00:22:29.929 } 00:22:29.929 ] 00:22:29.929 }' 00:22:29.929 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.929 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.929 [2024-12-09 10:33:07.577185] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:29.929 [2024-12-09 10:33:07.577231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685174 ] 00:22:30.188 [2024-12-09 10:33:07.651824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.188 [2024-12-09 10:33:07.691702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.188 [2024-12-09 10:33:07.844341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.754 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.754 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:30.755 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.755 10:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:31.013 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.013 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.273 Running I/O for 1 seconds... 00:22:32.233 5534.00 IOPS, 21.62 MiB/s 00:22:32.233 Latency(us) 00:22:32.233 [2024-12-09T09:33:09.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:32.233 Verification LBA range: start 0x0 length 0x2000 00:22:32.233 nvme0n1 : 1.02 5568.37 21.75 0.00 0.00 22815.14 5086.84 20472.20 00:22:32.233 [2024-12-09T09:33:09.957Z] =================================================================================================================== 00:22:32.233 [2024-12-09T09:33:09.957Z] Total : 5568.37 21.75 0.00 0.00 22815.14 5086.84 20472.20 00:22:32.233 { 00:22:32.233 "results": [ 00:22:32.233 { 00:22:32.233 "job": "nvme0n1", 00:22:32.233 "core_mask": "0x2", 00:22:32.233 "workload": "verify", 00:22:32.233 "status": "finished", 00:22:32.233 "verify_range": { 00:22:32.233 "start": 0, 00:22:32.233 "length": 8192 00:22:32.233 }, 00:22:32.233 "queue_depth": 128, 00:22:32.233 "io_size": 4096, 00:22:32.233 "runtime": 1.016815, 00:22:32.233 "iops": 5568.367893864665, 00:22:32.233 "mibps": 21.75143708540885, 00:22:32.233 "io_failed": 0, 00:22:32.233 "io_timeout": 0, 00:22:32.233 "avg_latency_us": 22815.139583186152, 00:22:32.233 "min_latency_us": 5086.8419047619045, 00:22:32.233 "max_latency_us": 20472.198095238095 00:22:32.233 } 00:22:32.233 ], 00:22:32.233 "core_count": 1 00:22:32.233 } 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:32.233 10:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:32.233 nvmf_trace.0 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2685174 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2685174 ']' 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2685174 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2685174 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2685174' 00:22:32.233 killing process with pid 2685174 00:22:32.233 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2685174 00:22:32.233 Received shutdown signal, test time was about 1.000000 seconds 00:22:32.233 00:22:32.233 Latency(us) 00:22:32.233 [2024-12-09T09:33:09.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.233 [2024-12-09T09:33:09.957Z] =================================================================================================================== 00:22:32.233 [2024-12-09T09:33:09.958Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.234 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2685174 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.492 rmmod nvme_tcp 00:22:32.492 rmmod nvme_fabrics 00:22:32.492 rmmod nvme_keyring 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:32.492 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2684931 ']' 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2684931 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2684931 ']' 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2684931 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2684931 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2684931' 00:22:32.493 killing process with pid 2684931 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2684931 00:22:32.493 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2684931 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.752 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.C73KmllEIT /tmp/tmp.21yExxzu4j /tmp/tmp.VU79ajSvIn 00:22:35.289 00:22:35.289 real 1m19.406s 00:22:35.289 user 2m1.289s 00:22:35.289 sys 0m30.717s 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.289 ************************************ 00:22:35.289 END TEST nvmf_tls 00:22:35.289 ************************************ 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:35.289 ************************************ 00:22:35.289 START TEST nvmf_fips 00:22:35.289 ************************************ 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:35.289 * Looking for test storage... 00:22:35.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.289 
10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:35.289 10:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:35.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.289 --rc genhtml_branch_coverage=1 00:22:35.289 --rc genhtml_function_coverage=1 00:22:35.289 --rc genhtml_legend=1 00:22:35.289 --rc geninfo_all_blocks=1 00:22:35.289 --rc geninfo_unexecuted_blocks=1 00:22:35.289 00:22:35.289 ' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:35.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.289 --rc genhtml_branch_coverage=1 00:22:35.289 --rc genhtml_function_coverage=1 00:22:35.289 --rc genhtml_legend=1 00:22:35.289 --rc geninfo_all_blocks=1 00:22:35.289 --rc geninfo_unexecuted_blocks=1 00:22:35.289 00:22:35.289 ' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:35.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.289 --rc genhtml_branch_coverage=1 00:22:35.289 --rc genhtml_function_coverage=1 00:22:35.289 --rc genhtml_legend=1 00:22:35.289 --rc geninfo_all_blocks=1 00:22:35.289 --rc geninfo_unexecuted_blocks=1 00:22:35.289 00:22:35.289 ' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:35.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.289 --rc genhtml_branch_coverage=1 00:22:35.289 --rc genhtml_function_coverage=1 00:22:35.289 --rc genhtml_legend=1 00:22:35.289 --rc geninfo_all_blocks=1 00:22:35.289 --rc geninfo_unexecuted_blocks=1 00:22:35.289 00:22:35.289 ' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.289 10:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.289 10:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.289 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:35.290 Error setting digest 00:22:35.290 40220C5F257F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:35.290 40220C5F257F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.290 10:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.290 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.291 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.852 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
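[annotation] The trace above builds per-family arrays (e810, x722, mlx) keyed by vendor:device PCI IDs looked up in pci_bus_cache. As a rough sketch (not taken from the SPDK sources), the same classification can be expressed as a standalone function; the ID table below is copied from the pci_bus_cache lookups visible in this log and is not exhaustive.

```shell
# Sketch: map a vendor:device PCI ID onto the NIC families the trace registers.
# IDs are the ones appearing in nvmf/common.sh@325-344 above; anything else is
# reported as unknown.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;     # Intel X722
        0x15b3:*)                    echo mlx ;;      # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # -> e810, matching the devices found later in the log
```

The run below indeed discovers two 0x8086:0x159b devices and takes the e810 branch (`[[ e810 == e810 ]]`).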
00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:41.853 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:41.853 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:41.853 Found net devices under 0000:86:00.0: cvl_0_0 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
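[annotation] The loop traced above (nvmf/common.sh@410-429) finds the net interface behind each PCI device by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path down to the interface name. The sketch below reproduces that glob against a throwaway mock tree so it can run unprivileged; the PCI addresses and `cvl_0_*` names are the ones from this log, and the mock directory layout is an assumption standing in for real sysfs.

```shell
# Sketch of the sysfs glob at nvmf/common.sh@411/@427, run against a mock tree.
mock=$(mktemp -d)
mkdir -p "$mock/0000:86:00.0/net/cvl_0_0" "$mock/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$mock/$pci/net/"*)           # glob, as in the trace
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$mock"
```

This mirrors the two "Found net devices under 0000:86:00.x" messages emitted below.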
00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:41.853 Found net devices under 0000:86:00.1: cvl_0_1 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.853 10:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:22:41.853 00:22:41.853 --- 10.0.0.2 ping statistics --- 00:22:41.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.853 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:22:41.853 00:22:41.853 --- 10.0.0.1 ping statistics --- 00:22:41.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.853 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:41.853 10:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2689588 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2689588 00:22:41.853 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2689588 ']' 00:22:41.854 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.854 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.854 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.854 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.854 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.854 [2024-12-09 10:33:18.945541] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:22:41.854 [2024-12-09 10:33:18.945593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.854 [2024-12-09 10:33:19.023176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.854 [2024-12-09 10:33:19.063775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.854 [2024-12-09 10:33:19.063813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.854 [2024-12-09 10:33:19.063821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.854 [2024-12-09 10:33:19.063826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.854 [2024-12-09 10:33:19.063832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
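[annotation] The nvmf_tcp_init sequence traced earlier (nvmf/common.sh@265-284) moves the target port into a private network namespace and addresses the pair, so target (10.0.0.2) and initiator (10.0.0.1) talk over real NICs on one host. Replaying those `ip` commands needs root, so the sketch below only assembles them as a dry run; interface, namespace, and address values are the ones from this log.

```shell
# Dry-run sketch of the namespace plumbing in nvmf_tcp_init above.
# Emits the command sequence instead of executing it (execution needs root).
build_netns_cmds() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
EOF
}

build_netns_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The two pings in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm this topology is up before the target starts.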
00:22:41.854 [2024-12-09 10:33:19.064393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.uf6 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.uf6 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.uf6 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.uf6 00:22:42.113 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:42.371 [2024-12-09 10:33:19.982628] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.371 [2024-12-09 10:33:19.998631] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.371 [2024-12-09 10:33:19.998835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.371 malloc0 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2689785 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2689785 /var/tmp/bdevperf.sock 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2689785 ']' 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.371 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:42.630 [2024-12-09 10:33:20.128226] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
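[annotation] The PSK handling at fips.sh@137-140 above writes the TLS key to a mktemp file with no trailing newline and locks it to mode 0600 before `keyring_file_add_key` is pointed at it. A minimal sketch of just that file handling (the key below is the non-secret test key from this log; the rpc.py steps are omitted since they need a running SPDK target):

```shell
# Sketch of the PSK file preparation at fips.sh@137-140 in the trace above.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"     # -n: the PSK interchange format has no trailing newline
chmod 0600 "$key_path"           # keyrings typically reject group/world-readable keys
```

The trace then registers this path as `key0` over the bdevperf RPC socket and attaches the controller with `--psk key0`.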
00:22:42.631 [2024-12-09 10:33:20.128284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689785 ] 00:22:42.631 [2024-12-09 10:33:20.204599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.631 [2024-12-09 10:33:20.246124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.567 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.567 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:43.567 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.uf6 00:22:43.567 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.826 [2024-12-09 10:33:21.316108] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.826 TLSTESTn1 00:22:43.826 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.826 Running I/O for 10 seconds... 
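[annotation] The bdevperf results dumped at the end of the run report both IOPS and MiB/s. With the 4096-byte IO size used here (`-o 4096`), the two are tied by a fixed factor, MiB/s = IOPS * 4096 / 1048576, which can be checked against the run's own JSON figures:

```shell
# Cross-check of the throughput arithmetic in the bdevperf results below.
# iops is the value from this run's JSON; the computation assumes 4 KiB IOs.
iops=5027.672004150427
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 4096 / 1048576 }')
echo "$mibps"   # 19.64, matching the mibps field reported by bdevperf
```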
00:22:46.151 5489.00 IOPS, 21.44 MiB/s [2024-12-09T09:33:24.808Z] 5558.50 IOPS, 21.71 MiB/s [2024-12-09T09:33:25.518Z] 5568.67 IOPS, 21.75 MiB/s [2024-12-09T09:33:26.907Z] 5557.50 IOPS, 21.71 MiB/s [2024-12-09T09:33:27.844Z] 5555.00 IOPS, 21.70 MiB/s [2024-12-09T09:33:28.802Z] 5395.17 IOPS, 21.07 MiB/s [2024-12-09T09:33:29.738Z] 5264.43 IOPS, 20.56 MiB/s [2024-12-09T09:33:30.673Z] 5169.38 IOPS, 20.19 MiB/s [2024-12-09T09:33:31.606Z] 5087.11 IOPS, 19.87 MiB/s [2024-12-09T09:33:31.606Z] 5025.80 IOPS, 19.63 MiB/s 00:22:53.882 Latency(us) 00:22:53.882 [2024-12-09T09:33:31.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.882 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:53.882 Verification LBA range: start 0x0 length 0x2000 00:22:53.882 TLSTESTn1 : 10.02 5027.67 19.64 0.00 0.00 25416.25 5305.30 22719.15 00:22:53.882 [2024-12-09T09:33:31.606Z] =================================================================================================================== 00:22:53.882 [2024-12-09T09:33:31.606Z] Total : 5027.67 19.64 0.00 0.00 25416.25 5305.30 22719.15 00:22:53.882 { 00:22:53.882 "results": [ 00:22:53.882 { 00:22:53.882 "job": "TLSTESTn1", 00:22:53.882 "core_mask": "0x4", 00:22:53.882 "workload": "verify", 00:22:53.882 "status": "finished", 00:22:53.882 "verify_range": { 00:22:53.882 "start": 0, 00:22:53.882 "length": 8192 00:22:53.882 }, 00:22:53.882 "queue_depth": 128, 00:22:53.882 "io_size": 4096, 00:22:53.882 "runtime": 10.021139, 00:22:53.882 "iops": 5027.672004150427, 00:22:53.882 "mibps": 19.639343766212605, 00:22:53.882 "io_failed": 0, 00:22:53.882 "io_timeout": 0, 00:22:53.882 "avg_latency_us": 25416.2538231811, 00:22:53.882 "min_latency_us": 5305.295238095238, 00:22:53.882 "max_latency_us": 22719.146666666667 00:22:53.882 } 00:22:53.882 ], 00:22:53.882 "core_count": 1 00:22:53.882 } 00:22:53.882 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:53.882 
10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:53.882 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:53.882 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:53.883 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:53.883 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:53.883 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:53.883 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:53.883 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:53.883 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:53.883 nvmf_trace.0 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2689785 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2689785 ']' 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2689785 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689785 00:22:54.140 10:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689785' 00:22:54.140 killing process with pid 2689785 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2689785 00:22:54.140 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.140 00:22:54.140 Latency(us) 00:22:54.140 [2024-12-09T09:33:31.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.140 [2024-12-09T09:33:31.864Z] =================================================================================================================== 00:22:54.140 [2024-12-09T09:33:31.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.140 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2689785 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.397 rmmod nvme_tcp 00:22:54.397 rmmod nvme_fabrics 00:22:54.397 rmmod nvme_keyring 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2689588 ']' 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2689588 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2689588 ']' 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2689588 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689588 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689588' 00:22:54.397 killing process with pid 2689588 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2689588 00:22:54.397 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2689588 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.655 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.uf6 00:22:56.601 00:22:56.601 real 0m21.721s 00:22:56.601 user 0m22.937s 00:22:56.601 sys 0m10.206s 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:56.601 ************************************ 00:22:56.601 END TEST nvmf_fips 00:22:56.601 ************************************ 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:56.601 ************************************ 00:22:56.601 START TEST nvmf_control_msg_list 00:22:56.601 ************************************ 00:22:56.601 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:56.859 * Looking for test storage... 00:22:56.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:56.859 10:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:56.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.859 --rc genhtml_branch_coverage=1 00:22:56.859 --rc genhtml_function_coverage=1 00:22:56.859 --rc genhtml_legend=1 00:22:56.859 --rc geninfo_all_blocks=1 00:22:56.859 --rc geninfo_unexecuted_blocks=1 00:22:56.859 00:22:56.859 ' 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:56.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.859 --rc genhtml_branch_coverage=1 00:22:56.859 --rc genhtml_function_coverage=1 00:22:56.859 --rc genhtml_legend=1 00:22:56.859 --rc geninfo_all_blocks=1 00:22:56.859 --rc geninfo_unexecuted_blocks=1 00:22:56.859 00:22:56.859 ' 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:56.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.859 --rc genhtml_branch_coverage=1 00:22:56.859 --rc genhtml_function_coverage=1 00:22:56.859 --rc genhtml_legend=1 00:22:56.859 --rc geninfo_all_blocks=1 00:22:56.859 --rc geninfo_unexecuted_blocks=1 00:22:56.859 00:22:56.859 ' 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:22:56.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.859 --rc genhtml_branch_coverage=1 00:22:56.859 --rc genhtml_function_coverage=1 00:22:56.859 --rc genhtml_legend=1 00:22:56.859 --rc geninfo_all_blocks=1 00:22:56.859 --rc geninfo_unexecuted_blocks=1 00:22:56.859 00:22:56.859 ' 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.859 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
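The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh) splits each version string on `.`, `-`, and `:` into arrays and compares components left to right. A minimal sketch of that comparison under the same split-and-compare approach (function name hypothetical; numeric components assumed, whereas the real helper also regex-checks each component):

```shell
# Minimal version comparison in the spirit of scripts/common.sh cmp_versions.
# Succeeds (returns 0) iff $1 < $2, comparing numeric components left to right.
version_lt() {
    local IFS=.-:                 # split on the same separators as the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0 (e.g. "2" behaves like "2.0")
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                      # equal is not "less than"
}
```

This is why the trace resolves `lt 1.15 2` as true: the first components 1 and 2 already decide the comparison.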
00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.860 10:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:56.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.860 10:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.860 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:03.425 10:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:03.425 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.425 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:03.426 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:03.426 10:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:03.426 Found net devices under 0000:86:00.0: cvl_0_0 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:03.426 10:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:03.426 Found net devices under 0000:86:00.1: cvl_0_1 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.426 10:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:03.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:23:03.426 00:23:03.426 --- 10.0.0.2 ping statistics --- 00:23:03.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.426 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:23:03.426 00:23:03.426 --- 10.0.0.1 ping statistics --- 00:23:03.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.426 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2695215 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2695215 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2695215 ']' 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
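The nvmf/common.sh namespace plumbing traced above reduces to a handful of iproute2/iptables commands. A minimal sketch, assuming the same interface names (cvl_0_0/cvl_0_1) and addresses as this log; requires root, so it is a config fragment rather than something to run blind:

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup traced above (assumption: the cvl_0_* NICs exist).
set -euo pipefail

NS=cvl_0_0_ns_spdk

# Start from clean interfaces, then move the target-side NIC into its own netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# Initiator side stays in the default namespace on 10.0.0.1, target on 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings mirror the `common.sh@290`/`@291` checks in the log: one from the initiator namespace to the target, one back.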
00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.426 [2024-12-09 10:33:40.500444] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:23:03.426 [2024-12-09 10:33:40.500495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.426 [2024-12-09 10:33:40.580910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.426 [2024-12-09 10:33:40.622977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.426 [2024-12-09 10:33:40.623012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.426 [2024-12-09 10:33:40.623019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.426 [2024-12-09 10:33:40.623026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.426 [2024-12-09 10:33:40.623031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
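`nvmfappstart` above launches `nvmf_tgt` inside the namespace and blocks until its RPC socket answers (`waitforlisten`). A hedged sketch of the same pattern; the retry count and `rpc.py` location are assumptions, the binary path and flags come from the log:

```shell
#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the target netns and wait for its RPC socket.
set -euo pipefail

NS=cvl_0_0_ns_spdk
APP=build/bin/nvmf_tgt          # assumption: run from an SPDK build tree
SOCK=/var/tmp/spdk.sock

# -i 0 sets the shared-memory id, -e 0xFFFF enables all tracepoint groups.
ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF &
pid=$!

# Poll the RPC socket (bounded retries, like waitforlisten) before issuing RPCs.
for _ in $(seq 1 100); do
    scripts/rpc.py -s "$SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.5
done
echo "nvmf_tgt up with pid $pid"
```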
00:23:03.426 [2024-12-09 10:33:40.623595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.426 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.427 [2024-12-09 10:33:40.760773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.427 Malloc0 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.427 [2024-12-09 10:33:40.813141] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2695239 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2695240 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2695241 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2695239 00:23:03.427 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:03.427 [2024-12-09 10:33:40.893588] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
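The `control_msg_list.sh@19`–`@23` steps above provision the target over RPC. A hedged equivalent using SPDK's `rpc.py` wrapper (NQN, bdev sizing, and listener address are taken from the log; the `rpc.py` path is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the RPC provisioning sequence traced above.
set -euo pipefail

RPC="scripts/rpc.py"                     # assumption: run from an SPDK checkout
SUBNQN=nqn.2024-07.io.spdk:cnode0

# TCP transport with a deliberately tiny control-message pool (the point of this test).
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

$RPC nvmf_create_subsystem "$SUBNQN" -a            # -a: allow any host
$RPC bdev_malloc_create -b Malloc0 32 512          # 32 MiB ramdisk, 512 B blocks
$RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc0
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
```

With only one control message available, the three concurrent `spdk_nvme_perf` initiators launched next are forced to contend for it, which is what the wildly different lcore-3 latencies in the results exercise.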
00:23:03.427 [2024-12-09 10:33:40.903662] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:03.427 [2024-12-09 10:33:40.903821] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:04.359 Initializing NVMe Controllers 00:23:04.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:04.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:04.359 Initialization complete. Launching workers. 00:23:04.359 ======================================================== 00:23:04.359 Latency(us) 00:23:04.359 Device Information : IOPS MiB/s Average min max 00:23:04.359 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6318.00 24.68 157.94 129.78 403.97 00:23:04.359 ======================================================== 00:23:04.359 Total : 6318.00 24.68 157.94 129.78 403.97 00:23:04.359 00:23:04.359 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2695240 00:23:04.359 Initializing NVMe Controllers 00:23:04.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:04.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:04.359 Initialization complete. Launching workers. 
00:23:04.359 ======================================================== 00:23:04.359 Latency(us) 00:23:04.359 Device Information : IOPS MiB/s Average min max 00:23:04.359 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40878.11 40247.77 41005.49 00:23:04.359 ======================================================== 00:23:04.359 Total : 25.00 0.10 40878.11 40247.77 41005.49 00:23:04.359 00:23:04.617 Initializing NVMe Controllers 00:23:04.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:04.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:04.617 Initialization complete. Launching workers. 00:23:04.617 ======================================================== 00:23:04.617 Latency(us) 00:23:04.617 Device Information : IOPS MiB/s Average min max 00:23:04.617 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6512.00 25.44 153.22 118.48 336.68 00:23:04.617 ======================================================== 00:23:04.617 Total : 6512.00 25.44 153.22 118.48 336.68 00:23:04.617 00:23:04.617 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2695241 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:04.618 10:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.618 rmmod nvme_tcp 00:23:04.618 rmmod nvme_fabrics 00:23:04.618 rmmod nvme_keyring 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2695215 ']' 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2695215 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2695215 ']' 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2695215 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2695215 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2695215' 00:23:04.618 killing process with pid 2695215 00:23:04.618 
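The perf summaries above are internally consistent: at queue depth 1 a single worker completes roughly one I/O per average latency, so IOPS ≈ 1e6 / avg-latency-in-µs. A quick check against the lcore-1 row from the log:

```python
# Sanity-check the qd=1 relationship IOPS ~ 1e6 / avg_latency_us
# using the lcore-1 row reported above.
iops = 6318.00        # reported IOPS
avg_us = 157.94       # reported average latency (us)

predicted_iops = 1_000_000 / avg_us
# Allow a little slack for per-I/O overhead outside the measured latency window.
rel_err = abs(predicted_iops - iops) / iops
print(f"predicted {predicted_iops:.0f} vs reported {iops:.0f}, rel err {rel_err:.3%}")
```

The same arithmetic explains the lcore-3 row: with the single control message contended, average latency balloons to ~40.9 ms, and 1e6 / 40878 ≈ 24.5 I/O per second matches the 25 IOPS reported.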
10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2695215 00:23:04.618 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2695215 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.876 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.781 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.781 00:23:06.781 real 0m10.135s 00:23:06.781 user 0m6.611s 00:23:06.781 sys 0m5.523s 00:23:06.781 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.781 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 ************************************ 00:23:06.781 END TEST nvmf_control_msg_list 00:23:06.781 ************************************ 00:23:06.781 10:33:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:06.781 10:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.781 10:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.781 10:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:07.040 ************************************ 00:23:07.040 START TEST nvmf_wait_for_buf 00:23:07.040 ************************************ 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:07.040 * Looking for test storage... 
00:23:07.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:07.040 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:23:07.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.041 --rc genhtml_branch_coverage=1 00:23:07.041 --rc genhtml_function_coverage=1 00:23:07.041 --rc genhtml_legend=1 00:23:07.041 --rc geninfo_all_blocks=1 00:23:07.041 --rc geninfo_unexecuted_blocks=1 00:23:07.041 00:23:07.041 ' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:07.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.041 --rc genhtml_branch_coverage=1 00:23:07.041 --rc genhtml_function_coverage=1 00:23:07.041 --rc genhtml_legend=1 00:23:07.041 --rc geninfo_all_blocks=1 00:23:07.041 --rc geninfo_unexecuted_blocks=1 00:23:07.041 00:23:07.041 ' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:07.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.041 --rc genhtml_branch_coverage=1 00:23:07.041 --rc genhtml_function_coverage=1 00:23:07.041 --rc genhtml_legend=1 00:23:07.041 --rc geninfo_all_blocks=1 00:23:07.041 --rc geninfo_unexecuted_blocks=1 00:23:07.041 00:23:07.041 ' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:07.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.041 --rc genhtml_branch_coverage=1 00:23:07.041 --rc genhtml_function_coverage=1 00:23:07.041 --rc genhtml_legend=1 00:23:07.041 --rc geninfo_all_blocks=1 00:23:07.041 --rc geninfo_unexecuted_blocks=1 00:23:07.041 00:23:07.041 ' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.041 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:13.612 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:13.612 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:13.612 Found net devices under 0000:86:00.0: cvl_0_0 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.612 10:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:13.612 Found net devices under 0000:86:00.1: cvl_0_1 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.612 10:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.612 10:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.612 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:23:13.612 00:23:13.612 --- 10.0.0.2 ping statistics --- 00:23:13.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.613 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:23:13.613 00:23:13.613 --- 10.0.0.1 ping statistics --- 00:23:13.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.613 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2699002 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2699002 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2699002 ']' 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.613 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:13.613 [2024-12-09 10:33:50.746511] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:23:13.613 [2024-12-09 10:33:50.746552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.613 [2024-12-09 10:33:50.826335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.613 [2024-12-09 10:33:50.867774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.613 [2024-12-09 10:33:50.867813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:13.613 [2024-12-09 10:33:50.867821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.613 [2024-12-09 10:33:50.867827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.613 [2024-12-09 10:33:50.867832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.613 [2024-12-09 10:33:50.868402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.871 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.871 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:13.871 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.871 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.871 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.129 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.129 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:14.129 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:14.129 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:14.129 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.129 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.130 
10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.130 Malloc0 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.130 [2024-12-09 10:33:51.728642] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:14.130 [2024-12-09 10:33:51.756831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:14.130 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:14.130 [2024-12-09 10:33:51.838881] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:15.504 Initializing NVMe Controllers 00:23:15.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:15.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:15.504 Initialization complete. Launching workers. 00:23:15.504 ======================================================== 00:23:15.504 Latency(us) 00:23:15.504 Device Information : IOPS MiB/s Average min max 00:23:15.504 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32270.68 7276.51 63875.89 00:23:15.504 ======================================================== 00:23:15.504 Total : 129.00 16.12 32270.68 7276.51 63875.89 00:23:15.504 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.504 10:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.504 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.504 rmmod nvme_tcp 00:23:15.504 rmmod nvme_fabrics 00:23:15.763 rmmod nvme_keyring 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2699002 ']' 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2699002 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2699002 ']' 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2699002 
00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2699002 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2699002' 00:23:15.763 killing process with pid 2699002 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2699002 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2699002 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.763 10:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.763 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.298 00:23:18.298 real 0m11.018s 00:23:18.298 user 0m4.662s 00:23:18.298 sys 0m4.969s 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:18.298 ************************************ 00:23:18.298 END TEST nvmf_wait_for_buf 00:23:18.298 ************************************ 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.298 10:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.576 
10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:23.576 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.576 10:34:01 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:23.576 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:23.576 Found net devices under 0000:86:00.0: cvl_0_0 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:23.576 Found net devices under 0000:86:00.1: cvl_0_1 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.576 ************************************ 00:23:23.576 START TEST nvmf_perf_adq 00:23:23.576 ************************************ 00:23:23.576 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:23.835 * Looking for test storage... 00:23:23.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.835 --rc genhtml_branch_coverage=1 00:23:23.835 --rc genhtml_function_coverage=1 00:23:23.835 --rc genhtml_legend=1 00:23:23.835 --rc geninfo_all_blocks=1 00:23:23.835 --rc geninfo_unexecuted_blocks=1 00:23:23.835 00:23:23.835 ' 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.835 --rc genhtml_branch_coverage=1 00:23:23.835 --rc genhtml_function_coverage=1 00:23:23.835 --rc genhtml_legend=1 00:23:23.835 --rc geninfo_all_blocks=1 00:23:23.835 --rc geninfo_unexecuted_blocks=1 00:23:23.835 00:23:23.835 ' 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.835 --rc genhtml_branch_coverage=1 00:23:23.835 --rc genhtml_function_coverage=1 00:23:23.835 --rc genhtml_legend=1 00:23:23.835 --rc geninfo_all_blocks=1 00:23:23.835 --rc geninfo_unexecuted_blocks=1 00:23:23.835 00:23:23.835 ' 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.835 --rc genhtml_branch_coverage=1 00:23:23.835 --rc genhtml_function_coverage=1 00:23:23.835 --rc genhtml_legend=1 00:23:23.835 --rc geninfo_all_blocks=1 00:23:23.835 --rc geninfo_unexecuted_blocks=1 00:23:23.835 00:23:23.835 ' 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.835 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.836 10:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.836 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.398 10:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:30.398 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.398 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:30.399 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:30.399 Found net devices under 0000:86:00.0: cvl_0_0 00:23:30.399 10:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:30.399 Found net devices under 0000:86:00.1: cvl_0_1 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:23:30.399 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:30.658 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:33.195 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.469 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:38.470 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:38.470 10:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:38.470 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:38.470 Found net devices under 0000:86:00.0: cvl_0_0 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:38.470 Found net devices under 0000:86:00.1: cvl_0_1 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:38.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:23:38.470 00:23:38.470 --- 10.0.0.2 ping statistics --- 00:23:38.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.470 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:23:38.470 00:23:38.470 --- 10.0.0.1 ping statistics --- 00:23:38.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.470 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2707346 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2707346 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2707346 ']' 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.470 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.470 [2024-12-09 10:34:15.672365] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:23:38.470 [2024-12-09 10:34:15.672405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.470 [2024-12-09 10:34:15.749619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.470 [2024-12-09 10:34:15.795863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.470 [2024-12-09 10:34:15.795891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.470 [2024-12-09 10:34:15.795898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.470 [2024-12-09 10:34:15.795905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.470 [2024-12-09 10:34:15.795910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.471 [2024-12-09 10:34:15.797269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.471 [2024-12-09 10:34:15.797377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.471 [2024-12-09 10:34:15.797484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.471 [2024-12-09 10:34:15.797485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.037 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:39.038 10:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 [2024-12-09 10:34:16.682349] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 Malloc1 00:23:39.038 10:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.038 [2024-12-09 10:34:16.739877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2707596 00:23:39.038 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:39.038 10:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:41.563 "tick_rate": 2100000000, 00:23:41.563 "poll_groups": [ 00:23:41.563 { 00:23:41.563 "name": "nvmf_tgt_poll_group_000", 00:23:41.563 "admin_qpairs": 1, 00:23:41.563 "io_qpairs": 1, 00:23:41.563 "current_admin_qpairs": 1, 00:23:41.563 "current_io_qpairs": 1, 00:23:41.563 "pending_bdev_io": 0, 00:23:41.563 "completed_nvme_io": 20077, 00:23:41.563 "transports": [ 00:23:41.563 { 00:23:41.563 "trtype": "TCP" 00:23:41.563 } 00:23:41.563 ] 00:23:41.563 }, 00:23:41.563 { 00:23:41.563 "name": "nvmf_tgt_poll_group_001", 00:23:41.563 "admin_qpairs": 0, 00:23:41.563 "io_qpairs": 1, 00:23:41.563 "current_admin_qpairs": 0, 00:23:41.563 "current_io_qpairs": 1, 00:23:41.563 "pending_bdev_io": 0, 00:23:41.563 "completed_nvme_io": 20117, 00:23:41.563 "transports": [ 00:23:41.563 { 00:23:41.563 "trtype": "TCP" 00:23:41.563 } 00:23:41.563 ] 00:23:41.563 }, 00:23:41.563 { 00:23:41.563 "name": "nvmf_tgt_poll_group_002", 00:23:41.563 "admin_qpairs": 0, 00:23:41.563 "io_qpairs": 1, 00:23:41.563 "current_admin_qpairs": 0, 00:23:41.563 "current_io_qpairs": 1, 00:23:41.563 "pending_bdev_io": 0, 00:23:41.563 "completed_nvme_io": 20171, 00:23:41.563 
"transports": [ 00:23:41.563 { 00:23:41.563 "trtype": "TCP" 00:23:41.563 } 00:23:41.563 ] 00:23:41.563 }, 00:23:41.563 { 00:23:41.563 "name": "nvmf_tgt_poll_group_003", 00:23:41.563 "admin_qpairs": 0, 00:23:41.563 "io_qpairs": 1, 00:23:41.563 "current_admin_qpairs": 0, 00:23:41.563 "current_io_qpairs": 1, 00:23:41.563 "pending_bdev_io": 0, 00:23:41.563 "completed_nvme_io": 19707, 00:23:41.563 "transports": [ 00:23:41.563 { 00:23:41.563 "trtype": "TCP" 00:23:41.563 } 00:23:41.563 ] 00:23:41.563 } 00:23:41.563 ] 00:23:41.563 }' 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:41.563 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2707596 00:23:49.667 Initializing NVMe Controllers 00:23:49.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:49.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:49.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:49.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:49.667 Initialization complete. Launching workers. 
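The test above verifies ADQ steering by piping `nvmf_get_stats` through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'` and counting lines with `wc -l`, expecting one active I/O qpair per poll group. A minimal Python sketch of that same check, using stats abridged from the JSON shown in the log (field names per the log output, not the full SPDK schema):

```python
import json

# Abridged nvmf_get_stats output, with values copied from the log above.
stats = json.loads("""
{
  "tick_rate": 2100000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 1,
     "current_io_qpairs": 1, "completed_nvme_io": 20077},
    {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 0,
     "current_io_qpairs": 1, "completed_nvme_io": 20117},
    {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 0,
     "current_io_qpairs": 1, "completed_nvme_io": 20171},
    {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 0,
     "current_io_qpairs": 1, "completed_nvme_io": 19707}
  ]
}
""")

# Same predicate the jq filter applies: poll groups with exactly one
# active I/O qpair, i.e. I/O spread evenly across all reactors.
active = [g for g in stats["poll_groups"] if g["current_io_qpairs"] == 1]
count = len(active)
print(count)  # → 4, matching the script's count=4 / [[ 4 -ne 4 ]] check
```

The `[[ 4 -ne 4 ]]` test in the log is the failure branch; with all four poll groups active it is skipped and the run proceeds.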
00:23:49.667 ======================================================== 00:23:49.667 Latency(us) 00:23:49.668 Device Information : IOPS MiB/s Average min max 00:23:49.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10741.30 41.96 5958.19 1592.04 10422.78 00:23:49.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10694.10 41.77 5984.64 2315.10 10047.40 00:23:49.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10514.60 41.07 6087.14 2068.05 10341.38 00:23:49.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10650.00 41.60 6008.32 2243.82 13528.92 00:23:49.668 ======================================================== 00:23:49.668 Total : 42600.00 166.41 6009.19 1592.04 13528.92 00:23:49.668 00:23:49.668 [2024-12-09 10:34:26.895677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3dc0 is same with the state(6) to be set 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.668 rmmod nvme_tcp 00:23:49.668 rmmod nvme_fabrics 00:23:49.668 rmmod nvme_keyring 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 
-- # set -e 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2707346 ']' 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2707346 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2707346 ']' 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2707346 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.668 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2707346 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2707346' 00:23:49.668 killing process with pid 2707346 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2707346 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2707346 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@297 -- # iptr 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.668 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.572 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.572 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:51.572 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:51.572 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:52.948 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:54.853 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.126 10:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.126 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:00.127 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.127 
10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:00.127 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:24:00.127 Found net devices under 0000:86:00.0: cvl_0_0 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:00.127 Found net devices under 0000:86:00.1: cvl_0_1 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:24:00.127 00:24:00.127 --- 10.0.0.2 ping statistics --- 00:24:00.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.127 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:00.127 00:24:00.127 --- 10.0.0.1 ping statistics --- 00:24:00.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.127 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:00.127 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:00.385 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:00.385 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:00.385 net.core.busy_poll = 1 00:24:00.385 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:00.385 net.core.busy_read = 1 00:24:00.385 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:00.385 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:00.385 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:00.385 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:00.385 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2711377 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2711377 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2711377 ']' 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.386 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:00.643 [2024-12-09 10:34:38.127308] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:00.643 [2024-12-09 10:34:38.127354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.643 [2024-12-09 10:34:38.205379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.643 [2024-12-09 10:34:38.251806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.643 [2024-12-09 10:34:38.251844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.643 [2024-12-09 10:34:38.251854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.643 [2024-12-09 10:34:38.251860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:00.643 [2024-12-09 10:34:38.251865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.643 [2024-12-09 10:34:38.253392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.643 [2024-12-09 10:34:38.253500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.643 [2024-12-09 10:34:38.253605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.643 [2024-12-09 10:34:38.253605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.574 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.574 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:01.574 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.574 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.574 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:01.574 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.575 [2024-12-09 10:34:39.146517] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.575 10:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.575 Malloc1 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.575 [2024-12-09 10:34:39.205690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2711625 
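For readability, the target-side ADQ configuration traced above (perf_adq.sh@42–49, issued via the harness's `rpc_cmd` wrapper) reduces to a short RPC sequence. A sketch of the same steps, assuming a running `nvmf_tgt` started with `--wait-for-rpc` and SPDK's `scripts/rpc.py` on `$PATH` (the flags, NQN, IP, and port are taken directly from the log; the `rpc.py` invocation form is an assumption):

```shell
# Placement IDs + zero-copy send on the posix sock impl (ADQ prerequisites)
rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
# Finish subsystem init deferred by --wait-for-rpc
rpc.py framework_start_init
# TCP transport with the socket priority the tc flower filter steers on
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
# Backing namespace, subsystem, and listener as in the trace
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

This is a configuration recipe against a live target, not a standalone script; in the run above each call additionally executes inside the `cvl_0_0_ns_spdk` namespace.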
00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:01.575 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:04.100 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:04.100 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.100 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.100 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.100 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:04.100 "tick_rate": 2100000000, 00:24:04.100 "poll_groups": [ 00:24:04.100 { 00:24:04.100 "name": "nvmf_tgt_poll_group_000", 00:24:04.100 "admin_qpairs": 1, 00:24:04.100 "io_qpairs": 2, 00:24:04.100 "current_admin_qpairs": 1, 00:24:04.100 "current_io_qpairs": 2, 00:24:04.100 "pending_bdev_io": 0, 00:24:04.100 "completed_nvme_io": 28315, 00:24:04.100 "transports": [ 00:24:04.100 { 00:24:04.100 "trtype": "TCP" 00:24:04.100 } 00:24:04.100 ] 00:24:04.100 }, 00:24:04.100 { 00:24:04.100 "name": "nvmf_tgt_poll_group_001", 00:24:04.100 "admin_qpairs": 0, 00:24:04.100 "io_qpairs": 2, 00:24:04.100 "current_admin_qpairs": 0, 00:24:04.100 "current_io_qpairs": 2, 00:24:04.100 "pending_bdev_io": 0, 00:24:04.100 "completed_nvme_io": 28821, 00:24:04.100 "transports": [ 00:24:04.100 { 00:24:04.100 "trtype": "TCP" 00:24:04.100 } 00:24:04.100 ] 00:24:04.100 }, 00:24:04.100 { 00:24:04.100 "name": "nvmf_tgt_poll_group_002", 00:24:04.100 "admin_qpairs": 0, 00:24:04.100 "io_qpairs": 0, 00:24:04.100 "current_admin_qpairs": 0, 
00:24:04.101 "current_io_qpairs": 0, 00:24:04.101 "pending_bdev_io": 0, 00:24:04.101 "completed_nvme_io": 0, 00:24:04.101 "transports": [ 00:24:04.101 { 00:24:04.101 "trtype": "TCP" 00:24:04.101 } 00:24:04.101 ] 00:24:04.101 }, 00:24:04.101 { 00:24:04.101 "name": "nvmf_tgt_poll_group_003", 00:24:04.101 "admin_qpairs": 0, 00:24:04.101 "io_qpairs": 0, 00:24:04.101 "current_admin_qpairs": 0, 00:24:04.101 "current_io_qpairs": 0, 00:24:04.101 "pending_bdev_io": 0, 00:24:04.101 "completed_nvme_io": 0, 00:24:04.101 "transports": [ 00:24:04.101 { 00:24:04.101 "trtype": "TCP" 00:24:04.101 } 00:24:04.101 ] 00:24:04.101 } 00:24:04.101 ] 00:24:04.101 }' 00:24:04.101 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:04.101 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:04.101 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:24:04.101 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:24:04.101 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2711625 00:24:12.217 Initializing NVMe Controllers 00:24:12.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:12.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:12.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:12.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:12.217 Initialization complete. Launching workers. 
00:24:12.217 ======================================================== 00:24:12.217 Latency(us) 00:24:12.217 Device Information : IOPS MiB/s Average min max 00:24:12.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7190.56 28.09 8900.09 1022.22 53605.96 00:24:12.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7976.76 31.16 8035.18 1549.51 52703.74 00:24:12.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7045.76 27.52 9082.67 1538.91 53574.03 00:24:12.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7947.66 31.05 8054.45 1480.66 52341.21 00:24:12.217 ======================================================== 00:24:12.217 Total : 30160.73 117.82 8491.16 1022.22 53605.96 00:24:12.217 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:12.217 rmmod nvme_tcp 00:24:12.217 rmmod nvme_fabrics 00:24:12.217 rmmod nvme_keyring 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:12.217 10:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2711377 ']' 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2711377 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2711377 ']' 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2711377 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2711377 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2711377' 00:24:12.217 killing process with pid 2711377 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2711377 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2711377 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.217 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:14.122 00:24:14.122 real 0m50.544s 00:24:14.122 user 2m49.635s 00:24:14.122 sys 0m10.252s 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.122 ************************************ 00:24:14.122 END TEST nvmf_perf_adq 00:24:14.122 ************************************ 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:14.122 10:34:51 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:24:14.383 ************************************ 00:24:14.383 START TEST nvmf_shutdown 00:24:14.383 ************************************ 00:24:14.383 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:14.383 * Looking for test storage... 00:24:14.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:14.383 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:14.383 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:14.383 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.383 10:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:14.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.383 --rc genhtml_branch_coverage=1 00:24:14.383 --rc genhtml_function_coverage=1 00:24:14.383 --rc genhtml_legend=1 00:24:14.383 --rc geninfo_all_blocks=1 00:24:14.383 --rc geninfo_unexecuted_blocks=1 00:24:14.383 00:24:14.383 ' 00:24:14.383 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:14.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.383 --rc genhtml_branch_coverage=1 00:24:14.383 --rc genhtml_function_coverage=1 00:24:14.383 --rc genhtml_legend=1 00:24:14.383 --rc geninfo_all_blocks=1 00:24:14.383 --rc geninfo_unexecuted_blocks=1 00:24:14.383 00:24:14.383 ' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:14.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.384 --rc genhtml_branch_coverage=1 00:24:14.384 --rc genhtml_function_coverage=1 00:24:14.384 --rc genhtml_legend=1 00:24:14.384 --rc geninfo_all_blocks=1 00:24:14.384 --rc geninfo_unexecuted_blocks=1 00:24:14.384 00:24:14.384 ' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:14.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.384 --rc genhtml_branch_coverage=1 00:24:14.384 --rc genhtml_function_coverage=1 00:24:14.384 --rc genhtml_legend=1 00:24:14.384 --rc geninfo_all_blocks=1 00:24:14.384 --rc geninfo_unexecuted_blocks=1 00:24:14.384 00:24:14.384 ' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:14.384 ************************************ 00:24:14.384 START TEST nvmf_shutdown_tc1 00:24:14.384 ************************************ 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.384 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:20.956 10:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.956 10:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:20.956 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.956 10:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:20.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:20.956 Found net devices under 0000:86:00.0: cvl_0_0 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:20.956 Found net devices under 0000:86:00.1: cvl_0_1 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.956 10:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.956 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:24:20.956 00:24:20.956 --- 10.0.0.2 ping statistics --- 00:24:20.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.956 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:20.956 00:24:20.956 --- 10.0.0.1 ping statistics --- 00:24:20.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.956 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2716855 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2716855 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2716855 ']' 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:20.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.956 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:20.956 [2024-12-09 10:34:58.138402] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:20.956 [2024-12-09 10:34:58.138445] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.956 [2024-12-09 10:34:58.218314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.956 [2024-12-09 10:34:58.258492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.956 [2024-12-09 10:34:58.258528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.956 [2024-12-09 10:34:58.258535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.956 [2024-12-09 10:34:58.258541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.956 [2024-12-09 10:34:58.258546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:20.956 [2024-12-09 10:34:58.260115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.956 [2024-12-09 10:34:58.260224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.956 [2024-12-09 10:34:58.260325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.956 [2024-12-09 10:34:58.260326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:21.520 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.520 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:21.520 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.520 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.520 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:21.520 [2024-12-09 10:34:59.020604] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.520 10:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.520 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.521 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:21.521 Malloc1 00:24:21.521 [2024-12-09 10:34:59.142145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.521 Malloc2 00:24:21.521 Malloc3 00:24:21.777 Malloc4 00:24:21.777 Malloc5 00:24:21.777 Malloc6 00:24:21.777 Malloc7 00:24:21.777 Malloc8 00:24:21.777 Malloc9 
00:24:22.035 Malloc10 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2717134 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2717134 /var/tmp/bdevperf.sock 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2717134 ']' 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.035 { 00:24:22.035 "params": { 00:24:22.035 "name": "Nvme$subsystem", 00:24:22.035 "trtype": "$TEST_TRANSPORT", 00:24:22.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.035 "adrfam": "ipv4", 00:24:22.035 "trsvcid": "$NVMF_PORT", 00:24:22.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.035 "hdgst": ${hdgst:-false}, 00:24:22.035 "ddgst": ${ddgst:-false} 00:24:22.035 }, 00:24:22.035 "method": "bdev_nvme_attach_controller" 00:24:22.035 } 00:24:22.035 EOF 00:24:22.035 )") 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.035 { 00:24:22.035 "params": { 00:24:22.035 "name": "Nvme$subsystem", 00:24:22.035 "trtype": "$TEST_TRANSPORT", 00:24:22.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.035 "adrfam": "ipv4", 00:24:22.035 "trsvcid": "$NVMF_PORT", 00:24:22.035 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.035 "hdgst": ${hdgst:-false}, 00:24:22.035 "ddgst": ${ddgst:-false} 00:24:22.035 }, 00:24:22.035 "method": "bdev_nvme_attach_controller" 00:24:22.035 } 00:24:22.035 EOF 00:24:22.035 )") 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.035 { 00:24:22.035 "params": { 00:24:22.035 "name": "Nvme$subsystem", 00:24:22.035 "trtype": "$TEST_TRANSPORT", 00:24:22.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.035 "adrfam": "ipv4", 00:24:22.035 "trsvcid": "$NVMF_PORT", 00:24:22.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.035 "hdgst": ${hdgst:-false}, 00:24:22.035 "ddgst": ${ddgst:-false} 00:24:22.035 }, 00:24:22.035 "method": "bdev_nvme_attach_controller" 00:24:22.035 } 00:24:22.035 EOF 00:24:22.035 )") 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.035 { 00:24:22.035 "params": { 00:24:22.035 "name": "Nvme$subsystem", 00:24:22.035 "trtype": "$TEST_TRANSPORT", 00:24:22.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.035 "adrfam": "ipv4", 00:24:22.035 "trsvcid": "$NVMF_PORT", 00:24:22.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.035 "hdgst": 
${hdgst:-false}, 00:24:22.035 "ddgst": ${ddgst:-false} 00:24:22.035 }, 00:24:22.035 "method": "bdev_nvme_attach_controller" 00:24:22.035 } 00:24:22.035 EOF 00:24:22.035 )") 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.035 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.036 { 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme$subsystem", 00:24:22.036 "trtype": "$TEST_TRANSPORT", 00:24:22.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "$NVMF_PORT", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.036 "hdgst": ${hdgst:-false}, 00:24:22.036 "ddgst": ${ddgst:-false} 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 } 00:24:22.036 EOF 00:24:22.036 )") 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.036 { 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme$subsystem", 00:24:22.036 "trtype": "$TEST_TRANSPORT", 00:24:22.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "$NVMF_PORT", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.036 "hdgst": ${hdgst:-false}, 00:24:22.036 "ddgst": ${ddgst:-false} 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 
00:24:22.036 } 00:24:22.036 EOF 00:24:22.036 )") 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.036 { 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme$subsystem", 00:24:22.036 "trtype": "$TEST_TRANSPORT", 00:24:22.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "$NVMF_PORT", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.036 "hdgst": ${hdgst:-false}, 00:24:22.036 "ddgst": ${ddgst:-false} 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 } 00:24:22.036 EOF 00:24:22.036 )") 00:24:22.036 [2024-12-09 10:34:59.616279] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:24:22.036 [2024-12-09 10:34:59.616329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.036 { 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme$subsystem", 00:24:22.036 "trtype": "$TEST_TRANSPORT", 00:24:22.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "$NVMF_PORT", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.036 "hdgst": ${hdgst:-false}, 00:24:22.036 "ddgst": ${ddgst:-false} 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 } 00:24:22.036 EOF 00:24:22.036 )") 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.036 { 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme$subsystem", 00:24:22.036 "trtype": "$TEST_TRANSPORT", 00:24:22.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "$NVMF_PORT", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.036 "hdgst": ${hdgst:-false}, 
00:24:22.036 "ddgst": ${ddgst:-false} 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 } 00:24:22.036 EOF 00:24:22.036 )") 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:22.036 { 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme$subsystem", 00:24:22.036 "trtype": "$TEST_TRANSPORT", 00:24:22.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "$NVMF_PORT", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.036 "hdgst": ${hdgst:-false}, 00:24:22.036 "ddgst": ${ddgst:-false} 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 } 00:24:22.036 EOF 00:24:22.036 )") 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:22.036 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme1", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme2", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme3", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme4", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 
00:24:22.036 "name": "Nvme5", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme6", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme7", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme8", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.036 "adrfam": "ipv4", 00:24:22.036 "trsvcid": "4420", 00:24:22.036 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:22.036 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:22.036 "hdgst": false, 00:24:22.036 "ddgst": false 00:24:22.036 }, 00:24:22.036 "method": "bdev_nvme_attach_controller" 00:24:22.036 },{ 00:24:22.036 "params": { 00:24:22.036 "name": "Nvme9", 00:24:22.036 "trtype": "tcp", 00:24:22.036 "traddr": "10.0.0.2", 00:24:22.037 "adrfam": "ipv4", 00:24:22.037 "trsvcid": "4420", 00:24:22.037 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:22.037 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:22.037 "hdgst": false, 00:24:22.037 "ddgst": false 00:24:22.037 }, 00:24:22.037 "method": "bdev_nvme_attach_controller" 00:24:22.037 },{ 00:24:22.037 "params": { 00:24:22.037 "name": "Nvme10", 00:24:22.037 "trtype": "tcp", 00:24:22.037 "traddr": "10.0.0.2", 00:24:22.037 "adrfam": "ipv4", 00:24:22.037 "trsvcid": "4420", 00:24:22.037 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:22.037 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:22.037 "hdgst": false, 00:24:22.037 "ddgst": false 00:24:22.037 }, 00:24:22.037 "method": "bdev_nvme_attach_controller" 00:24:22.037 }' 00:24:22.037 [2024-12-09 10:34:59.696385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.037 [2024-12-09 10:34:59.737024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2717134 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:23.932 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:24.865 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2717134 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:24.865 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2716855 00:24:24.865 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:24.865 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:24.865 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:24.865 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:24.865 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 
10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 [2024-12-09 10:35:02.542512] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:24:24.866 [2024-12-09 10:35:02.542564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717623 ] 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": 
"bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.866 { 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme$subsystem", 00:24:24.866 "trtype": "$TEST_TRANSPORT", 00:24:24.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.866 "adrfam": "ipv4", 00:24:24.866 "trsvcid": "$NVMF_PORT", 00:24:24.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.866 "hdgst": ${hdgst:-false}, 00:24:24.866 "ddgst": ${ddgst:-false} 00:24:24.866 }, 00:24:24.866 "method": "bdev_nvme_attach_controller" 00:24:24.866 } 00:24:24.866 EOF 00:24:24.866 )") 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:24.866 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:24.866 "params": { 00:24:24.866 "name": "Nvme1", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme2", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme3", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme4", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 
00:24:24.867 "name": "Nvme5", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme6", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme7", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme8", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme9", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 },{ 00:24:24.867 "params": { 00:24:24.867 "name": "Nvme10", 00:24:24.867 "trtype": "tcp", 00:24:24.867 "traddr": "10.0.0.2", 00:24:24.867 "adrfam": "ipv4", 00:24:24.867 "trsvcid": "4420", 00:24:24.867 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:24.867 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:24.867 "hdgst": false, 00:24:24.867 "ddgst": false 00:24:24.867 }, 00:24:24.867 "method": "bdev_nvme_attach_controller" 00:24:24.867 }' 00:24:25.124 [2024-12-09 10:35:02.619785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.125 [2024-12-09 10:35:02.660647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.496 Running I/O for 1 seconds... 00:24:27.461 2245.00 IOPS, 140.31 MiB/s 00:24:27.461 Latency(us) 00:24:27.461 [2024-12-09T09:35:05.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.461 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme1n1 : 1.16 276.45 17.28 0.00 0.00 229618.25 21720.50 215707.06 00:24:27.461 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme2n1 : 1.16 274.84 17.18 0.00 0.00 227876.96 14293.09 215707.06 00:24:27.461 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme3n1 : 1.13 282.17 17.64 0.00 0.00 218590.06 14979.66 215707.06 00:24:27.461 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme4n1 : 1.14 280.39 17.52 0.00 0.00 216841.17 12857.54 212711.13 00:24:27.461 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme5n1 : 1.15 281.95 17.62 0.00 0.00 212401.07 4088.20 218702.99 00:24:27.461 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme6n1 : 1.17 273.44 17.09 0.00 0.00 216649.73 21970.16 228689.43 00:24:27.461 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme7n1 : 1.17 274.67 17.17 0.00 0.00 211783.29 26089.57 209715.20 00:24:27.461 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme8n1 : 1.16 275.35 17.21 0.00 0.00 209022.39 13668.94 213709.78 00:24:27.461 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme9n1 : 1.17 272.99 17.06 0.00 0.00 207890.72 15978.30 224694.86 00:24:27.461 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.461 Verification LBA range: start 0x0 length 0x400 00:24:27.461 Nvme10n1 : 1.18 272.30 17.02 0.00 0.00 205273.67 15728.64 234681.30 00:24:27.461 [2024-12-09T09:35:05.185Z] =================================================================================================================== 00:24:27.461 [2024-12-09T09:35:05.185Z] Total : 2764.56 172.78 0.00 0.00 215589.75 4088.20 234681.30 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.754 rmmod nvme_tcp 00:24:27.754 rmmod nvme_fabrics 00:24:27.754 rmmod nvme_keyring 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2716855 ']' 00:24:27.754 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2716855 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2716855 ']' 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2716855 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716855 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716855' 00:24:27.755 killing process with pid 2716855 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2716855 00:24:27.755 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2716855 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.022 10:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.022 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.554 00:24:30.554 real 0m15.647s 00:24:30.554 user 0m35.435s 00:24:30.554 sys 0m5.808s 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:30.554 ************************************ 00:24:30.554 END TEST nvmf_shutdown_tc1 00:24:30.554 ************************************ 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:30.554 ************************************ 00:24:30.554 
START TEST nvmf_shutdown_tc2 00:24:30.554 ************************************ 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.554 10:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:30.554 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:30.555 10:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:30.555 10:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:30.555 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:30.555 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:30.555 10:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.555 10:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:30.555 Found net devices under 0000:86:00.0: cvl_0_0 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:30.555 Found net devices under 0000:86:00.1: cvl_0_1 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:30.555 10:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:30.555 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.555 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.555 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.555 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:30.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:30.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:24:30.556 00:24:30.556 --- 10.0.0.2 ping statistics --- 00:24:30.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.556 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:24:30.556 00:24:30.556 --- 10.0.0.1 ping statistics --- 00:24:30.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.556 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:30.556 10:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2718651 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2718651 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2718651 ']' 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.556 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:30.556 [2024-12-09 10:35:08.184892] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:30.556 [2024-12-09 10:35:08.184938] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.556 [2024-12-09 10:35:08.264102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.813 [2024-12-09 10:35:08.305775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.813 [2024-12-09 10:35:08.305811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.813 [2024-12-09 10:35:08.305819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.813 [2024-12-09 10:35:08.305825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.813 [2024-12-09 10:35:08.305847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:30.813 [2024-12-09 10:35:08.307326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.813 [2024-12-09 10:35:08.307436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.813 [2024-12-09 10:35:08.307544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.813 [2024-12-09 10:35:08.307544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.377 [2024-12-09 10:35:09.075500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.377 10:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.377 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.378 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:31.378 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.378 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.378 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.378 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.378 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.378 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.634 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.634 Malloc1 00:24:31.634 [2024-12-09 10:35:09.180450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.634 Malloc2 00:24:31.634 Malloc3 00:24:31.634 Malloc4 00:24:31.634 Malloc5 00:24:31.890 Malloc6 00:24:31.890 Malloc7 00:24:31.890 Malloc8 00:24:31.890 Malloc9 
00:24:31.890 Malloc10 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2718936 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2718936 /var/tmp/bdevperf.sock 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2718936 ']' 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:31.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:31.890 { 00:24:31.890 "params": { 00:24:31.890 "name": "Nvme$subsystem", 00:24:31.890 "trtype": "$TEST_TRANSPORT", 00:24:31.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.890 "adrfam": "ipv4", 00:24:31.890 "trsvcid": "$NVMF_PORT", 00:24:31.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.890 "hdgst": ${hdgst:-false}, 00:24:31.890 "ddgst": ${ddgst:-false} 00:24:31.890 }, 00:24:31.890 "method": "bdev_nvme_attach_controller" 00:24:31.890 } 00:24:31.890 EOF 00:24:31.890 )") 00:24:31.890 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.147 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.147 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.147 { 00:24:32.147 "params": { 00:24:32.147 "name": "Nvme$subsystem", 00:24:32.147 "trtype": "$TEST_TRANSPORT", 00:24:32.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.147 
"adrfam": "ipv4", 00:24:32.147 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": ${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.148 { 00:24:32.148 "params": { 00:24:32.148 "name": "Nvme$subsystem", 00:24:32.148 "trtype": "$TEST_TRANSPORT", 00:24:32.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.148 "adrfam": "ipv4", 00:24:32.148 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": ${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.148 { 00:24:32.148 "params": { 00:24:32.148 "name": "Nvme$subsystem", 00:24:32.148 "trtype": "$TEST_TRANSPORT", 00:24:32.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.148 "adrfam": "ipv4", 00:24:32.148 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": ${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.148 { 00:24:32.148 "params": { 00:24:32.148 "name": "Nvme$subsystem", 00:24:32.148 "trtype": "$TEST_TRANSPORT", 00:24:32.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.148 "adrfam": "ipv4", 00:24:32.148 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": ${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.148 { 00:24:32.148 "params": { 00:24:32.148 "name": "Nvme$subsystem", 00:24:32.148 "trtype": "$TEST_TRANSPORT", 00:24:32.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.148 "adrfam": "ipv4", 00:24:32.148 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": 
${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.148 [2024-12-09 10:35:09.652146] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:32.148 [2024-12-09 10:35:09.652197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718936 ] 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.148 { 00:24:32.148 "params": { 00:24:32.148 "name": "Nvme$subsystem", 00:24:32.148 "trtype": "$TEST_TRANSPORT", 00:24:32.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.148 "adrfam": "ipv4", 00:24:32.148 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": ${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.148 { 00:24:32.148 "params": { 00:24:32.148 "name": "Nvme$subsystem", 00:24:32.148 "trtype": "$TEST_TRANSPORT", 00:24:32.148 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.148 "adrfam": "ipv4", 00:24:32.148 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": ${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.148 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.148 { 00:24:32.148 "params": { 00:24:32.148 "name": "Nvme$subsystem", 00:24:32.148 "trtype": "$TEST_TRANSPORT", 00:24:32.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.148 "adrfam": "ipv4", 00:24:32.148 "trsvcid": "$NVMF_PORT", 00:24:32.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.148 "hdgst": ${hdgst:-false}, 00:24:32.148 "ddgst": ${ddgst:-false} 00:24:32.148 }, 00:24:32.148 "method": "bdev_nvme_attach_controller" 00:24:32.148 } 00:24:32.148 EOF 00:24:32.148 )") 00:24:32.149 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.149 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.149 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.149 { 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme$subsystem", 00:24:32.149 "trtype": "$TEST_TRANSPORT", 00:24:32.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "$NVMF_PORT", 00:24:32.149 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.149 "hdgst": ${hdgst:-false}, 00:24:32.149 "ddgst": ${ddgst:-false} 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 } 00:24:32.149 EOF 00:24:32.149 )") 00:24:32.149 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:32.149 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:24:32.149 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:32.149 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme1", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme2", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme3", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 
"method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme4", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme5", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme6", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme7", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme8", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme9", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 },{ 00:24:32.149 "params": { 00:24:32.149 "name": "Nvme10", 00:24:32.149 "trtype": "tcp", 00:24:32.149 "traddr": "10.0.0.2", 00:24:32.149 "adrfam": "ipv4", 00:24:32.149 "trsvcid": "4420", 00:24:32.149 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:32.149 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:32.149 "hdgst": false, 00:24:32.149 "ddgst": false 00:24:32.149 }, 00:24:32.149 "method": "bdev_nvme_attach_controller" 00:24:32.149 }' 00:24:32.149 [2024-12-09 10:35:09.734027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.149 [2024-12-09 10:35:09.774880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.040 Running I/O for 10 seconds... 
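The trace above shows `gen_nvmf_target_json` assembling one JSON `"params"` stanza per subsystem from a heredoc template, then joining the stanzas with `IFS=,` before feeding them to bdevperf via `/dev/fd/63`. A minimal sketch of that pattern, with a hypothetical `gen_config` standing in for the real helper in SPDK's `nvmf/common.sh` (fields trimmed to the essentials):

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-template config loop traced above (assumption:
# simplified fields; the real helper also emits traddr, trtype, digests, etc.).
gen_config() {
    local config=()
    local subsystem
    # "${@:-1}" defaults to subsystem 1 when no arguments are given,
    # mirroring the `for subsystem in "${@:-1}"` seen in the trace.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the per-subsystem stanzas with commas, as `IFS=,` + printf does above.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_config 1 2
```

Expanding the template once per subsystem keeps the per-controller JSON identical except for the numbered names, which is why the final `printf '%s\n'` output above lists Nvme1 through Nvme10 with only the `cnode`/`host` suffixes varying.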
00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:34.040 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:34.297 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2718936 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2718936 
']' 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2718936 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.555 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718936 00:24:34.813 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.813 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.813 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718936' 00:24:34.813 killing process with pid 2718936 00:24:34.813 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2718936 00:24:34.813 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2718936 00:24:34.813 Received shutdown signal, test time was about 0.919971 seconds 00:24:34.813 00:24:34.813 Latency(us) 00:24:34.813 [2024-12-09T09:35:12.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.813 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme1n1 : 0.91 280.69 17.54 0.00 0.00 225443.35 33204.91 215707.06 00:24:34.813 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme2n1 : 0.91 287.14 17.95 0.00 0.00 216051.07 4743.56 214708.42 
00:24:34.813 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme3n1 : 0.89 291.33 18.21 0.00 0.00 208925.05 5804.62 201726.05 00:24:34.813 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme4n1 : 0.89 290.57 18.16 0.00 0.00 205604.92 3666.90 214708.42 00:24:34.813 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme5n1 : 0.91 282.51 17.66 0.00 0.00 208509.44 15728.64 215707.06 00:24:34.813 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme6n1 : 0.90 289.04 18.06 0.00 0.00 199666.05 3027.14 210713.84 00:24:34.813 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme7n1 : 0.91 281.41 17.59 0.00 0.00 201796.51 16602.45 200727.41 00:24:34.813 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme8n1 : 0.92 283.88 17.74 0.00 0.00 196379.03 608.55 218702.99 00:24:34.813 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme9n1 : 0.92 278.46 17.40 0.00 0.00 196662.86 15791.06 219701.64 00:24:34.813 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:34.813 Verification LBA range: start 0x0 length 0x400 00:24:34.813 Nvme10n1 : 0.88 218.07 13.63 0.00 0.00 244223.67 15978.30 239674.51 00:24:34.813 [2024-12-09T09:35:12.537Z] =================================================================================================================== 00:24:34.813 
[2024-12-09T09:35:12.537Z] Total : 2783.10 173.94 0.00 0.00 209424.18 608.55 239674.51 00:24:34.813 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2718651 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.184 rmmod nvme_tcp 00:24:36.184 rmmod nvme_fabrics 00:24:36.184 rmmod nvme_keyring 00:24:36.184 10:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2718651 ']' 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2718651 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2718651 ']' 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2718651 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718651 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718651' 00:24:36.184 killing process with pid 2718651 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2718651 00:24:36.184 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 2718651 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.443 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.998 00:24:38.998 real 0m8.310s 00:24:38.998 user 0m25.890s 00:24:38.998 sys 0m1.381s 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:38.998 ************************************ 00:24:38.998 END TEST nvmf_shutdown_tc2 00:24:38.998 ************************************ 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:38.998 ************************************ 00:24:38.998 START TEST nvmf_shutdown_tc3 00:24:38.998 ************************************ 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.998 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.999 10:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.999 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.999 10:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.999 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.999 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:38.999 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.999 10:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.999 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:39.000 00:24:39.000 --- 10.0.0.2 ping statistics --- 00:24:39.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.000 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:39.000 00:24:39.000 --- 10.0.0.1 ping statistics --- 00:24:39.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.000 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2720206 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2720206 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2720206 ']' 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.000 10:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.000 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.000 [2024-12-09 10:35:16.559060] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:39.000 [2024-12-09 10:35:16.559101] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.000 [2024-12-09 10:35:16.637222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.000 [2024-12-09 10:35:16.678843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.000 [2024-12-09 10:35:16.678882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.000 [2024-12-09 10:35:16.678888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.000 [2024-12-09 10:35:16.678895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.000 [2024-12-09 10:35:16.678900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.000 [2024-12-09 10:35:16.680354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.000 [2024-12-09 10:35:16.680464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.000 [2024-12-09 10:35:16.680570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.000 [2024-12-09 10:35:16.680571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.257 [2024-12-09 10:35:16.813721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.257 10:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.257 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.257 Malloc1 00:24:39.257 [2024-12-09 10:35:16.917033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.257 Malloc2 00:24:39.257 Malloc3 00:24:39.513 Malloc4 00:24:39.513 Malloc5 00:24:39.513 Malloc6 00:24:39.513 Malloc7 00:24:39.513 Malloc8 00:24:39.770 Malloc9
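The create_subsystems loop traced above (shutdown.sh@28-29) appends one batch of RPCs per subsystem to rpcs.txt and then replays the whole file through rpc_cmd (shutdown.sh@36), which is why one Malloc bdev per subsystem appears in the output. A minimal self-contained sketch of that pattern; the temp file and the exact RPC arguments are illustrative, not copied from shutdown.sh:

```shell
#!/usr/bin/env bash
# Sketch: build one RPC batch per subsystem into a file, then send the
# whole file at once (the real test pipes it into rpc.py via rpc_cmd).
num_subsystems=({1..10})
rpcs=$(mktemp)   # stand-in for test/nvmf/target/rpcs.txt
for i in "${num_subsystems[@]}"; do
cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# Instead of sending the batches, just count them for this sketch.
batches=$(grep -c nvmf_create_subsystem "$rpcs")
echo "$batches"
rm -f "$rpcs"
```

Batching all RPCs into one file keeps the trace short and lets rpc_cmd open the UNIX socket once instead of once per call.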
00:24:39.770 Malloc10 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2720299 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2720299 /var/tmp/bdevperf.sock 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2720299 ']' 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:39.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.770 { 00:24:39.770 "params": { 00:24:39.770 "name": "Nvme$subsystem", 00:24:39.770 "trtype": "$TEST_TRANSPORT", 00:24:39.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.770 "adrfam": "ipv4", 00:24:39.770 "trsvcid": "$NVMF_PORT", 00:24:39.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.770 "hdgst": ${hdgst:-false}, 00:24:39.770 "ddgst": ${ddgst:-false} 00:24:39.770 }, 00:24:39.770 "method": "bdev_nvme_attach_controller" 00:24:39.770 } 00:24:39.770 EOF 00:24:39.770 )") 00:24:39.770 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:39.771 [2024-12-09 10:35:17.394035] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:39.771 [2024-12-09 10:35:17.394084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720299 ] 00:24:39.771 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:24:39.771 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:39.771 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:39.771 "params": { 00:24:39.771 "name": "Nvme1", 00:24:39.771 "trtype": "tcp", 00:24:39.771 "traddr": "10.0.0.2", 00:24:39.771 "adrfam": "ipv4", 00:24:39.771 "trsvcid": "4420", 00:24:39.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.771 "hdgst": false, 00:24:39.771 "ddgst": false 00:24:39.771 }, 00:24:39.771 "method": "bdev_nvme_attach_controller" 00:24:39.771 },{ 00:24:39.771 "params": { 00:24:39.771 "name": "Nvme2", 00:24:39.771 "trtype": "tcp", 00:24:39.771 "traddr": "10.0.0.2", 00:24:39.771 "adrfam": "ipv4", 00:24:39.771 "trsvcid": "4420", 00:24:39.771 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:39.771 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:39.771 "hdgst": false, 00:24:39.771 "ddgst": false 00:24:39.771 }, 00:24:39.771 "method": "bdev_nvme_attach_controller" 00:24:39.771 },{ 00:24:39.771 "params": { 00:24:39.771 "name": "Nvme3", 00:24:39.771 "trtype": "tcp", 00:24:39.771 "traddr": "10.0.0.2", 00:24:39.771 "adrfam": "ipv4", 00:24:39.771 "trsvcid": "4420", 00:24:39.771 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:39.771 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:39.771 "hdgst": false, 00:24:39.771 "ddgst": false 00:24:39.771 }, 00:24:39.771 "method": "bdev_nvme_attach_controller" 00:24:39.771 },{ 00:24:39.771 "params": { 00:24:39.771 "name": "Nvme4", 00:24:39.771 "trtype": "tcp", 00:24:39.771 "traddr": "10.0.0.2", 00:24:39.771 "adrfam": "ipv4", 00:24:39.771 "trsvcid": "4420", 00:24:39.771 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:39.771 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:39.771 "hdgst": false, 00:24:39.771 "ddgst": false 00:24:39.771 }, 00:24:39.771 "method": "bdev_nvme_attach_controller" 00:24:39.771 },{ 00:24:39.771 "params": { 
00:24:39.771 "name": "Nvme5", 00:24:39.771 "trtype": "tcp", 00:24:39.771 "traddr": "10.0.0.2", 00:24:39.771 "adrfam": "ipv4", 00:24:39.771 "trsvcid": "4420", 00:24:39.771 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:39.771 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:39.771 "hdgst": false, 00:24:39.771 "ddgst": false 00:24:39.771 }, 00:24:39.771 "method": "bdev_nvme_attach_controller" 00:24:39.771 },{ 00:24:39.771 "params": { 00:24:39.771 "name": "Nvme6", 00:24:39.771 "trtype": "tcp", 00:24:39.771 "traddr": "10.0.0.2", 00:24:39.771 "adrfam": "ipv4", 00:24:39.771 "trsvcid": "4420", 00:24:39.771 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:39.772 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:39.772 "hdgst": false, 00:24:39.772 "ddgst": false 00:24:39.772 }, 00:24:39.772 "method": "bdev_nvme_attach_controller" 00:24:39.772 },{ 00:24:39.772 "params": { 00:24:39.772 "name": "Nvme7", 00:24:39.772 "trtype": "tcp", 00:24:39.772 "traddr": "10.0.0.2", 00:24:39.772 "adrfam": "ipv4", 00:24:39.772 "trsvcid": "4420", 00:24:39.772 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:39.772 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:39.772 "hdgst": false, 00:24:39.772 "ddgst": false 00:24:39.772 }, 00:24:39.772 "method": "bdev_nvme_attach_controller" 00:24:39.772 },{ 00:24:39.772 "params": { 00:24:39.772 "name": "Nvme8", 00:24:39.772 "trtype": "tcp", 00:24:39.772 "traddr": "10.0.0.2", 00:24:39.772 "adrfam": "ipv4", 00:24:39.772 "trsvcid": "4420", 00:24:39.772 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:39.772 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:39.772 "hdgst": false, 00:24:39.772 "ddgst": false 00:24:39.772 }, 00:24:39.772 "method": "bdev_nvme_attach_controller" 00:24:39.772 },{ 00:24:39.772 "params": { 00:24:39.772 "name": "Nvme9", 00:24:39.772 "trtype": "tcp", 00:24:39.772 "traddr": "10.0.0.2", 00:24:39.772 "adrfam": "ipv4", 00:24:39.772 "trsvcid": "4420", 00:24:39.772 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:39.772 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:39.772 "hdgst": false, 00:24:39.772 "ddgst": false 00:24:39.772 }, 00:24:39.772 "method": "bdev_nvme_attach_controller" 00:24:39.772 },{ 00:24:39.772 "params": { 00:24:39.772 "name": "Nvme10", 00:24:39.772 "trtype": "tcp", 00:24:39.772 "traddr": "10.0.0.2", 00:24:39.772 "adrfam": "ipv4", 00:24:39.772 "trsvcid": "4420", 00:24:39.772 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:39.772 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:39.772 "hdgst": false, 00:24:39.772 "ddgst": false 00:24:39.772 }, 00:24:39.772 "method": "bdev_nvme_attach_controller" 00:24:39.772 }' 00:24:39.772 [2024-12-09 10:35:17.472050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.027 [2024-12-09 10:35:17.513482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.393 Running I/O for 10 seconds... 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:41.651 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:41.908 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:24:41.908 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:41.908 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:41.908 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:41.908 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.908 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2720206 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2720206 ']' 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2720206 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:42.181 10:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720206 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720206' killing process with pid 2720206 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2720206 00:24:42.181 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2720206 00:24:42.181 [2024-12-09 10:35:19.721635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832ac0 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723233]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723310] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723388] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723477] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723551] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.182 [2024-12-09 10:35:19.723557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.723563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.723569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.723575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.723581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.723587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6e30 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.724520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 [2024-12-09 10:35:19.724555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 [2024-12-09 10:35:19.724571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 
[2024-12-09 10:35:19.724585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 [2024-12-09 10:35:19.724598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24b20 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.724663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 [2024-12-09 10:35:19.724672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 [2024-12-09 10:35:19.724686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 [2024-12-09 10:35:19.724700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.183 [2024-12-09 10:35:19.724714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.183 [2024-12-09 10:35:19.724720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cadd0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 
is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be 
set 00:24:42.183 [2024-12-09 10:35:19.725893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 
10:35:19.725969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.725998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726048] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.183 [2024-12-09 10:35:19.726114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.726120] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.726126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.726133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.726139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832fb0 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727655] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 00:24:42.184 [2024-12-09 10:35:19.727731] 
00:24:42.184 [2024-12-09 10:35:19.727739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833480 is same with the state(6) to be set 
00:24:42.184 [message above repeated verbatim for tqpair=0x1833480 through 10:35:19.728011; duplicate entries omitted] 
00:24:42.184 [2024-12-09 10:35:19.729022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833970 is same with the state(6) to be set 
00:24:42.185 [message above repeated verbatim for tqpair=0x1833970 through 10:35:19.729442; duplicate entries omitted] 
00:24:42.185 [2024-12-09 10:35:19.730351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e40 is same with the state(6) to be set 
00:24:42.186 [message above repeated verbatim for tqpair=0x1833e40 through 10:35:19.730795; duplicate entries omitted] 
00:24:42.186 [2024-12-09 10:35:19.731779] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:24:42.186 [2024-12-09 10:35:19.731845] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:24:42.186 [2024-12-09 10:35:19.732295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:42.186 [2024-12-09 10:35:19.732316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:42.186 [WRITE/completion pair above repeated for cid:28 through cid:63 (lba 28160 through 32640, len:128 each), 10:35:19.732331 through 10:35:19.732863; near-duplicate entries omitted] 
00:24:42.187 [2024-12-09 10:35:19.732871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:42.187 [2024-12-09 10:35:19.732880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:42.187 [2024-12-09 10:35:19.732888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.732894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.732902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.732908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.732917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.732911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.732931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.732939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.732946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732952] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.732954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.732961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.732975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.732982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.732989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.732994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.732996]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.733007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.733014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.733021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.733028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.733036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.733043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.733058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.733065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.187 [2024-12-09 10:35:19.733072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.187 [2024-12-09 10:35:19.733077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.187 [2024-12-09 10:35:19.733079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733187]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.188 [2024-12-09 10:35:19.733352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347e0 is same with the state(6) to be set 00:24:42.188 [2024-12-09 10:35:19.733775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.188 [2024-12-09 10:35:19.733860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.188 [2024-12-09 10:35:19.733866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:42.189 [2024-12-09 10:35:19.733978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.733993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.733999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734056] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 
10:35:19.734308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.189 [2024-12-09 10:35:19.734428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.189 [2024-12-09 10:35:19.734435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 
[2024-12-09 10:35:19.734552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.190 [2024-12-09 10:35:19.734726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.734733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaefd80 is same with the state(6) to be set 00:24:42.190 [2024-12-09 10:35:19.734849] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:42.190 [2024-12-09 10:35:19.736046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:42.190 [2024-12-09 10:35:19.736100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f5830 (9): Bad file descriptor 00:24:42.190 [2024-12-09 10:35:19.736126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.190 [2024-12-09 10:35:19.736134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.736142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.190 [2024-12-09 10:35:19.736149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.736156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.190 [2024-12-09 10:35:19.736163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.736173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.190 [2024-12-09 10:35:19.736179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.736186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca940 is same with the state(6) to be set 00:24:42.190 [2024-12-09 10:35:19.736210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.190 [2024-12-09 10:35:19.736218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.736225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.190 [2024-12-09 10:35:19.736232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.736238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.190 [2024-12-09 10:35:19.736245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.190 [2024-12-09 10:35:19.738266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834cd0 is same with the state(6) to be set 00:24:42.190 [2024-12-09 10:35:19.738287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834cd0 is same with the state(6) to be set 00:24:42.190 [2024-12-09 10:35:19.738295] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834cd0 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.738590]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834cd0 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.738596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834cd0 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.738602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834cd0 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.191 [2024-12-09 10:35:19.739179] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.739477]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.739483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.739488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.739494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.739500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.739516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6960 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.750437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ef530 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.750488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24b20 (9): Bad file descriptor 00:24:42.192 [2024-12-09 10:35:19.750533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 
10:35:19.750554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1eb20 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.750649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa386d0 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.750755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4df610 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.750854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cadd0 (9): Bad file descriptor 00:24:42.192 [2024-12-09 10:35:19.750882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.750957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bee10 is same with the 
state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.750985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.750997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.751006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.751014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.751024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.751033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.751041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.192 [2024-12-09 10:35:19.751050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.192 [2024-12-09 10:35:19.751059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6930 is same with the state(6) to be set 00:24:42.192 [2024-12-09 10:35:19.752360] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:42.192 [2024-12-09 10:35:19.752696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:42.193 [2024-12-09 10:35:19.752770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca940 (9): Bad file descriptor 
00:24:42.193 [2024-12-09 10:35:19.752794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ef530 (9): Bad file descriptor 00:24:42.193 [2024-12-09 10:35:19.752827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1eb20 (9): Bad file descriptor 00:24:42.193 [2024-12-09 10:35:19.752844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa386d0 (9): Bad file descriptor 00:24:42.193 [2024-12-09 10:35:19.752866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4df610 (9): Bad file descriptor 00:24:42.193 [2024-12-09 10:35:19.752887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bee10 (9): Bad file descriptor 00:24:42.193 [2024-12-09 10:35:19.752904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6930 (9): Bad file descriptor 00:24:42.193 [2024-12-09 10:35:19.753007] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:42.193 [2024-12-09 10:35:19.753767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.193 [2024-12-09 10:35:19.753794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f5830 with addr=10.0.0.2, port=4420 00:24:42.193 [2024-12-09 10:35:19.753820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f5830 is same with the state(6) to be set 00:24:42.193 [2024-12-09 10:35:19.754002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.193 [2024-12-09 10:35:19.754015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5cadd0 with addr=10.0.0.2, port=4420 00:24:42.193 [2024-12-09 10:35:19.754024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cadd0 is same with the state(6) to be set 00:24:42.193 [2024-12-09 
10:35:19.754375] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:42.193 [2024-12-09 10:35:19.754593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 
10:35:19.754840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.754988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.754999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:42.193 [2024-12-09 10:35:19.755174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.193 [2024-12-09 10:35:19.755241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.193 [2024-12-09 10:35:19.755252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 
10:35:19.755608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.194 [2024-12-09 10:35:19.755865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.194 [2024-12-09 10:35:19.755875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae92c0 is same with the state(6) to be set 00:24:42.194 [2024-12-09 10:35:19.757150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:42.194 [2024-12-09 10:35:19.757180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f5830 (9): Bad file descriptor 00:24:42.194 [2024-12-09 10:35:19.757193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cadd0 (9): Bad file descriptor 00:24:42.194 [2024-12-09 10:35:19.757314] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:42.194 [2024-12-09 10:35:19.757650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.194 [2024-12-09 10:35:19.757669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24b20 with addr=10.0.0.2, port=4420 00:24:42.194 [2024-12-09 10:35:19.757679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24b20 is same with the state(6) to be set 00:24:42.194 [2024-12-09 10:35:19.757689] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:42.194 [2024-12-09 10:35:19.757698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:42.194 [2024-12-09 10:35:19.757708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:42.194 [2024-12-09 10:35:19.757718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:42.194 [2024-12-09 10:35:19.757728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:42.194 [2024-12-09 10:35:19.757736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:42.194 [2024-12-09 10:35:19.757746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:42.194 [2024-12-09 10:35:19.757753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:24:42.194 [2024-12-09 10:35:19.757856] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:42.195 [2024-12-09 10:35:19.758158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24b20 (9): Bad file descriptor 00:24:42.195 [2024-12-09 10:35:19.758217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 
[2024-12-09 10:35:19.758435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.195 [2024-12-09 10:35:19.758862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.195 [2024-12-09 10:35:19.758871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.758882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.758890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.758901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.758909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.758920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.758930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.758940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.758949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.758959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.758968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.758978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.758987] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.758998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 
10:35:19.759211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.196 [2024-12-09 10:35:19.759472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.196 [2024-12-09 10:35:19.759482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19193b0 is same with the state(6) to be set 00:24:42.196 [2024-12-09 10:35:19.759583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:42.196 [2024-12-09 10:35:19.759594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:42.196 [2024-12-09 10:35:19.759603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:42.196 [2024-12-09 10:35:19.759611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:24:42.196 [2024-12-09 10:35:19.760737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:24:42.196 [2024-12-09 10:35:19.761030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.196 [2024-12-09 10:35:19.761045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa386d0 with addr=10.0.0.2, port=4420 00:24:42.196 [2024-12-09 10:35:19.761053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa386d0 is same with the state(6) to be set 00:24:42.196 [2024-12-09 10:35:19.761327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa386d0 (9): Bad file descriptor 00:24:42.196 [2024-12-09 10:35:19.761379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:42.196 [2024-12-09 10:35:19.761388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:42.196 [2024-12-09 10:35:19.761395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:42.196 [2024-12-09 10:35:19.761403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:24:42.196 [2024-12-09 10:35:19.762846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.196 [2024-12-09 10:35:19.762862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical *NOTICE* command/completion pairs repeat for READ sqid:1 cid:1-63 (lba:16512-24448, step 128, len:128), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:42.198 [2024-12-09 10:35:19.763918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0db0 is same with the state(6) to be set
[... identical *NOTICE* command/completion pairs repeat for READ sqid:1 cid:5-13 (lba:25216-26240), WRITE sqid:1 cid:0-4 (lba:32768-33280), and READ sqid:1 cid:14-53 (lba:26368-31360), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:42.200 [2024-12-09 10:35:19.765925] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.765933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.765943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.765950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.765959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.765966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.765975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.765982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.765991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.765998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.766007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.766014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.766023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.766030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.766039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.766046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.766055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.766062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.766071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.766078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.766086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf980 is same with the state(6) to be set 00:24:42.200 [2024-12-09 10:35:19.767185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:42.200 [2024-12-09 10:35:19.767212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:42.200 [2024-12-09 10:35:19.767496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.200 [2024-12-09 10:35:19.767600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.200 [2024-12-09 10:35:19.767610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 
10:35:19.767866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767959] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.767990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.767997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 
[2024-12-09 10:35:19.768141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.201 [2024-12-09 10:35:19.768221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.201 [2024-12-09 10:35:19.768230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.201 [2024-12-09 10:35:19.768237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.201 [2024-12-09 10:35:19.768245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0b80 is same with the state(6) to be set
00:24:42.201 [2024-12-09 10:35:19.769329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.202 [2024-12-09 10:35:19.769343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for WRITE sqid:1 cid:59-63 (lba 32128-32640, len:128) and READ sqid:1 cid:0-57 (lba 24576-31872, len:128), each completed ABORTED - SQ DELETION (00/08) ...]
00:24:42.203 [2024-12-09 10:35:19.770427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d0310 is same with the state(6) to be set
00:24:42.203 [2024-12-09 10:35:19.771392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.203 [2024-12-09 10:35:19.771403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for READ sqid:1 cid:5-50 (lba 25216-30976, len:128), each completed ABORTED - SQ DELETION (00/08) ...]
00:24:42.204 [2024-12-09 10:35:19.772108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.204 [2024-12-09 10:35:19.772115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.204 [2024-12-09 10:35:19.772123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.204 [2024-12-09 10:35:19.772129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.204 [2024-12-09 10:35:19.772137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.204 [2024-12-09 10:35:19.772145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.204 [2024-12-09 10:35:19.772154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.204 [2024-12-09 10:35:19.772160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 
[2024-12-09 10:35:19.772276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.772351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.772359] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d28e0 is same with the state(6) to be set 00:24:42.205 [2024-12-09 10:35:19.773347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:42.205 [2024-12-09 10:35:19.773518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773597] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.205 [2024-12-09 10:35:19.773730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.205 [2024-12-09 10:35:19.773738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 
10:35:19.773852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773934] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.773992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.773998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 
[2024-12-09 10:35:19.774100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.206 [2024-12-09 10:35:19.774292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.206 [2024-12-09 10:35:19.774299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8030 is same with the state(6) to be set 00:24:42.206 [2024-12-09 10:35:19.775262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:42.206 [2024-12-09 10:35:19.775278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:42.207 [2024-12-09 10:35:19.775289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:42.207 [2024-12-09 10:35:19.775344] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:24:42.207 [2024-12-09 10:35:19.775356] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:24:42.207 [2024-12-09 10:35:19.775366] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:24:42.207 [2024-12-09 10:35:19.775379] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:24:42.207 [2024-12-09 10:35:19.775390] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:24:42.207 [2024-12-09 10:35:19.775450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:24:42.207 [2024-12-09 10:35:19.775461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:24:42.207 task offset: 28032 on job bdev=Nvme6n1 fails
00:24:42.207
00:24:42.207 Latency(us)
00:24:42.207 [2024-12-09T09:35:19.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:42.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme1n1 ended in about 0.82 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme1n1 : 0.82 161.23 10.08 77.59 0.00 264933.68 17975.59 229688.08
00:24:42.207 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme2n1 ended in about 0.84 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme2n1 : 0.84 152.81 9.55 76.40 0.00 270905.86 17725.93 220700.28
00:24:42.207 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme3n1 ended in about 0.84 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme3n1 : 0.84 234.58 14.66 76.21 0.00 195915.10 12857.54 207717.91
00:24:42.207 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme4n1 ended in about 0.84 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme4n1 : 0.84 228.04 14.25 76.01 0.00 196422.83 13793.77 218702.99
00:24:42.207 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme5n1 ended in about 0.84 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme5n1 : 0.84 227.46 14.22 75.82 0.00 193107.63 15728.64 213709.78
00:24:42.207 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme6n1 ended in about 0.81 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme6n1 : 0.81 237.40 14.84 79.13 0.00 180417.65 2980.33 214708.42
00:24:42.207 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme7n1 ended in about 0.85 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme7n1 : 0.85 231.67 14.48 75.65 0.00 183006.52 14542.75 211712.49
00:24:42.207 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme8n1 ended in about 0.83 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme8n1 : 0.83 230.37 14.40 76.79 0.00 178824.78 15229.32 199728.76
00:24:42.207 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme9n1 ended in about 0.85 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme9n1 : 0.85 150.95 9.43 75.47 0.00 238167.77 19723.22 225693.50
00:24:42.207 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:42.207 Job: Nvme10n1 ended in about 0.83 seconds with error
00:24:42.207 Verification LBA range: start 0x0 length 0x400
00:24:42.207 Nvme10n1 : 0.83 154.28 9.64 77.14 0.00 227104.67 24092.28 227690.79
00:24:42.207 [2024-12-09T09:35:19.931Z] ===================================================================================================================
00:24:42.207 [2024-12-09T09:35:19.931Z] Total : 2008.78 125.55 766.21 0.00 208774.62 2980.33 229688.08
00:24:42.207 [2024-12-09 10:35:19.806151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:42.207 [2024-12-09 10:35:19.806200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:24:42.207 [2024-12-09 10:35:19.806218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:42.207 [2024-12-09 10:35:19.806227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:24:42.207 [2024-12-09 10:35:19.806542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.207 [2024-12-09 10:35:19.806560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bee10 with addr=10.0.0.2, port=4420
00:24:42.207 [2024-12-09 10:35:19.806569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bee10 is same with the state(6) to be set
00:24:42.207 [2024-12-09 10:35:19.806792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.207 [2024-12-09 10:35:19.806802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca940 with addr=10.0.0.2, port=4420
00:24:42.207 [2024-12-09 10:35:19.806815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca940 is same with the state(6) to be set
00:24:42.207 [2024-12-09 10:35:19.806984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.207 [2024-12-09 10:35:19.806996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f6930 with addr=10.0.0.2, port=4420
00:24:42.207 [2024-12-09 10:35:19.807003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6930 is same with the state(6) to be set
00:24:42.207 [2024-12-09 10:35:19.808627]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.207 [2024-12-09 10:35:19.808646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ef530 with addr=10.0.0.2, port=4420 00:24:42.207 [2024-12-09 10:35:19.808658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ef530 is same with the state(6) to be set 00:24:42.207 [2024-12-09 10:35:19.808803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.207 [2024-12-09 10:35:19.808818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4df610 with addr=10.0.0.2, port=4420 00:24:42.207 [2024-12-09 10:35:19.808826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4df610 is same with the state(6) to be set 00:24:42.207 [2024-12-09 10:35:19.808979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.207 [2024-12-09 10:35:19.808990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1eb20 with addr=10.0.0.2, port=4420 00:24:42.207 [2024-12-09 10:35:19.808997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1eb20 is same with the state(6) to be set 00:24:42.207 [2024-12-09 10:35:19.809161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.207 [2024-12-09 10:35:19.809171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5cadd0 with addr=10.0.0.2, port=4420 00:24:42.207 [2024-12-09 10:35:19.809178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cadd0 is same with the state(6) to be set 00:24:42.207 [2024-12-09 10:35:19.809391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.207 [2024-12-09 10:35:19.809402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f5830 
with addr=10.0.0.2, port=4420 00:24:42.207 [2024-12-09 10:35:19.809409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f5830 is same with the state(6) to be set 00:24:42.207 [2024-12-09 10:35:19.809422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bee10 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca940 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f6930 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809466] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:24:42.207 [2024-12-09 10:35:19.809480] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:24:42.207 [2024-12-09 10:35:19.809494] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:24:42.207 [2024-12-09 10:35:19.809504] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:24:42.207 [2024-12-09 10:35:19.809516] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:24:42.207 [2024-12-09 10:35:19.809797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:42.207 [2024-12-09 10:35:19.809816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:24:42.207 [2024-12-09 10:35:19.809853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ef530 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4df610 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1eb20 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cadd0 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f5830 (9): Bad file descriptor 00:24:42.207 [2024-12-09 10:35:19.809901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:42.207 [2024-12-09 10:35:19.809907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:42.207 [2024-12-09 10:35:19.809915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:42.207 [2024-12-09 10:35:19.809923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:24:42.207 [2024-12-09 10:35:19.809930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:42.207 [2024-12-09 10:35:19.809936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:42.207 [2024-12-09 10:35:19.809942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:42.208 [2024-12-09 10:35:19.809948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:42.208 [2024-12-09 10:35:19.809955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.809961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.809967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:42.208 [2024-12-09 10:35:19.809973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:24:42.208 [2024-12-09 10:35:19.810197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.208 [2024-12-09 10:35:19.810211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24b20 with addr=10.0.0.2, port=4420 00:24:42.208 [2024-12-09 10:35:19.810218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24b20 is same with the state(6) to be set 00:24:42.208 [2024-12-09 10:35:19.810359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.208 [2024-12-09 10:35:19.810369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa386d0 with addr=10.0.0.2, port=4420 00:24:42.208 [2024-12-09 10:35:19.810376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa386d0 is same with the state(6) to be set 00:24:42.208 [2024-12-09 10:35:19.810383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.810389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.810395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:42.208 [2024-12-09 10:35:19.810401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:42.208 [2024-12-09 10:35:19.810590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.810597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.810605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:24:42.208 [2024-12-09 10:35:19.810611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:24:42.208 [2024-12-09 10:35:19.810617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.810623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.810629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:24:42.208 [2024-12-09 10:35:19.810639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:24:42.208 [2024-12-09 10:35:19.810645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.810651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.810657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:42.208 [2024-12-09 10:35:19.810662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:42.208 [2024-12-09 10:35:19.810669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.810675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.810681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:24:42.208 [2024-12-09 10:35:19.810687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:42.208 [2024-12-09 10:35:19.810715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24b20 (9): Bad file descriptor 00:24:42.208 [2024-12-09 10:35:19.810726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa386d0 (9): Bad file descriptor 00:24:42.208 [2024-12-09 10:35:19.810750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.810757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.810764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:42.208 [2024-12-09 10:35:19.810770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:24:42.208 [2024-12-09 10:35:19.810776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:42.208 [2024-12-09 10:35:19.810782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:42.208 [2024-12-09 10:35:19.810789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:42.208 [2024-12-09 10:35:19.810795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:24:42.466 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2720299 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2720299 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2720299 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.844 rmmod nvme_tcp 00:24:43.844 rmmod nvme_fabrics 00:24:43.844 rmmod nvme_keyring 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:43.844 10:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2720206 ']' 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2720206 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2720206 ']' 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2720206 00:24:43.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2720206) - No such process 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2720206 is not found' 00:24:43.844 Process with pid 2720206 is not found 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.844 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.751 00:24:45.751 real 0m7.088s 00:24:45.751 user 0m16.179s 00:24:45.751 sys 0m1.288s 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:45.751 ************************************ 00:24:45.751 END TEST nvmf_shutdown_tc3 00:24:45.751 ************************************ 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:45.751 ************************************ 00:24:45.751 START TEST nvmf_shutdown_tc4 00:24:45.751 ************************************ 00:24:45.751 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:45.751 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.751 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:45.751 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:45.751 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.751 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:24:45.751 Found net devices under 0000:86:00.0: cvl_0_0 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.751 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:45.752 Found net devices under 0000:86:00.1: cvl_0_1 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.752 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.752 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:46.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:24:46.011 00:24:46.011 --- 10.0.0.2 ping statistics --- 00:24:46.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.011 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:24:46.011 00:24:46.011 --- 10.0.0.1 ping statistics --- 00:24:46.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.011 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.011 10:35:23 
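For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh) amounts to the following steps: create a network namespace, move the target-side interface into it, assign the 10.0.0.1/10.0.0.2 pair, bring the links up, open TCP port 4420 in iptables, and verify with ping in both directions. The sketch below is a dry run that only prints the commands instead of executing them, since the real sequence needs root and the cvl_0_* interfaces present on this rig; the interface and namespace names are taken from the log.

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# run() only echoes each command; drop it to execute for real (as root).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side interface, moved into the namespace
INI_IF=cvl_0_1      # initiator-side interface, stays in the root namespace

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> root ns
```

After this, the target application is launched under `ip netns exec $NS` so it listens on 10.0.0.2:4420, while initiators connect from the root namespace via 10.0.0.1.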
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2721520 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2721520 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2721520 ']' 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.011 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:46.270 [2024-12-09 10:35:23.757643] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:46.270 [2024-12-09 10:35:23.757686] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.270 [2024-12-09 10:35:23.835317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.270 [2024-12-09 10:35:23.875383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.270 [2024-12-09 10:35:23.875419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.270 [2024-12-09 10:35:23.875426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.270 [2024-12-09 10:35:23.875431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.270 [2024-12-09 10:35:23.875436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:46.270 [2024-12-09 10:35:23.876934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.270 [2024-12-09 10:35:23.877039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.270 [2024-12-09 10:35:23.877123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.270 [2024-12-09 10:35:23.877124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:47.203 [2024-12-09 10:35:24.632262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.203 10:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.203 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.204 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:47.204 Malloc1 00:24:47.204 [2024-12-09 10:35:24.742984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.204 Malloc2 00:24:47.204 Malloc3 00:24:47.204 Malloc4 00:24:47.204 Malloc5 00:24:47.462 Malloc6 00:24:47.462 Malloc7 00:24:47.462 Malloc8 00:24:47.462 Malloc9 
00:24:47.462 Malloc10 00:24:47.462 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.462 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:47.462 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.462 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:47.462 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2721798 00:24:47.462 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:47.462 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:47.718 [2024-12-09 10:35:25.253090] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2721520 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2721520 ']' 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2721520 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2721520 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2721520' 00:24:52.978 killing process with pid 2721520 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2721520 00:24:52.978 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2721520 00:24:52.978 [2024-12-09 10:35:30.246302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 
10:35:30.246362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.246426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109b990 is same with the state(6) to be set 00:24:52.978 Write completed with error (sct=0, sc=8) 
00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 Write completed 
with error (sct=0, sc=8) 00:24:52.978 starting I/O failed: -6 00:24:52.978 Write completed with error (sct=0, sc=8) 00:24:52.978 [2024-12-09 10:35:30.247429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.247422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.978 [2024-12-09 10:35:30.247456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.247464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.978 NVMe io qpair process completion error 00:24:52.978 [2024-12-09 10:35:30.247478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.247485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.978 [2024-12-09 10:35:30.247491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a100 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x109a5f0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a5f0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a5f0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a5f0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a5f0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.247996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a5f0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.248444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.248468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.248476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.248482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.248488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.248494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 is same with the state(6) to be set 00:24:52.979 [2024-12-09 10:35:30.248501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 
is same with the state(6) to be set
00:24:52.979 [2024-12-09 10:35:30.248507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aae0 is same with the state(6) to be set
00:24:52.979 [2024-12-09 10:35:30.249023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1099c30 is same with the state(6) to be set
[... same message for tqpair=0x1099c30 repeated through 10:35:30.249092 ...]
00:24:52.979 Write completed with error (sct=0, sc=8)
00:24:52.979 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records repeated ...]
00:24:52.979 [2024-12-09 10:35:30.255784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-error records repeated ...]
00:24:52.979 [2024-12-09 10:35:30.256650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-error records repeated ...]
00:24:52.980 [2024-12-09 10:35:30.257631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error records repeated ...]
00:24:52.980 [2024-12-09 10:35:30.259174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.980 NVMe io qpair process completion error
[... write-error records repeated ...]
00:24:52.981 [2024-12-09 10:35:30.260066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109eec0 is same with the state(6) to be set
[... same message for tqpair=0x109eec0 repeated through 10:35:30.260150, interleaved with write-error records ...]
00:24:52.981 [2024-12-09 10:35:30.260128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error records repeated ...]
00:24:52.981 [2024-12-09 10:35:30.260519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set
[... same message for tqpair=0x109f390 repeated through 10:35:30.260580, interleaved with write-error records ...]
00:24:52.981 [2024-12-09 10:35:30.261021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-error records repeated ...]
00:24:52.982 [2024-12-09 10:35:30.262019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-error records repeated ...]
00:24:52.982 [2024-12-09 10:35:30.263714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:52.982 NVMe io qpair process completion error
[... write-error records repeated ...]
00:24:52.982 [2024-12-09 10:35:30.264718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error records repeated ...]
00:24:52.983 Write
completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 [2024-12-09 10:35:30.265618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 
starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 
Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 [2024-12-09 10:35:30.266598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 
00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.983 starting I/O failed: -6 00:24:52.983 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: 
-6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O 
failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 [2024-12-09 10:35:30.268414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:52.984 NVMe io qpair process completion error 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, 
sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 [2024-12-09 10:35:30.269298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 
00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write 
completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 [2024-12-09 10:35:30.270145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with 
error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.984 Write completed with error (sct=0, sc=8) 00:24:52.984 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 
starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 [2024-12-09 10:35:30.271171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write 
completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 
Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 [2024-12-09 10:35:30.273190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.985 NVMe io qpair process completion error 00:24:52.985 Write completed 
with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 Write completed with error (sct=0, sc=8) 00:24:52.985 starting I/O failed: -6 
00:24:52.985 Write completed with error (sct=0, sc=8)
00:24:52.985 starting I/O failed: -6
00:24:52.985 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:24:52.986 [2024-12-09 10:35:30.274182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.986 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.986 [2024-12-09 10:35:30.275049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:52.986 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.986 [2024-12-09 10:35:30.276065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:52.987 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.987 [2024-12-09 10:35:30.279413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:52.987 NVMe io qpair process completion error
00:24:52.987 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.987 [2024-12-09 10:35:30.280467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.987 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.987 [2024-12-09 10:35:30.281366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:52.988 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.988 [2024-12-09 10:35:30.282363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:52.988 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.988 [2024-12-09 10:35:30.285512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:52.988 NVMe io qpair process completion error
00:24:52.989 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
00:24:52.989 [2024-12-09 10:35:30.288894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:52.989 [repeated "Write completed with error" / "starting I/O failed: -6" lines elided]
failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 starting I/O failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 starting I/O failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 starting I/O failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 starting I/O failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 starting I/O failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 starting I/O failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.989 starting I/O failed: -6 00:24:52.989 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 [2024-12-09 10:35:30.289830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write 
completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 Write completed with error (sct=0, sc=8) 
00:24:52.990 starting I/O failed: -6 00:24:52.990 [2024-12-09 10:35:30.290818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed 
with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write 
completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.990 starting I/O failed: -6 00:24:52.990 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 [2024-12-09 10:35:30.292439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.991 NVMe io qpair process completion error 00:24:52.991 Write completed with 
error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 
00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 [2024-12-09 10:35:30.293613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error 
(sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 [2024-12-09 10:35:30.294460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 
00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.991 Write completed with error (sct=0, sc=8) 00:24:52.991 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with 
error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 [2024-12-09 10:35:30.295603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:52.992 Write completed with error (sct=0, 
sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error 
(sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with 
error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 [2024-12-09 10:35:30.303751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.992 NVMe io qpair process completion error 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with 
error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.992 starting I/O failed: -6 00:24:52.992 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 
00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 [2024-12-09 10:35:30.304652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 
00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 [2024-12-09 10:35:30.305589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:52.993 Write completed with error (sct=0, sc=8) 
00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.993 starting I/O failed: -6 00:24:52.993 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with 
error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 
starting I/O failed: -6 00:24:52.994 [2024-12-09 10:35:30.306607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error 
(sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with 
error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 Write completed with error (sct=0, sc=8) 00:24:52.994 starting I/O failed: -6 00:24:52.994 [2024-12-09 10:35:30.308981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:52.994 NVMe io qpair process completion error 00:24:52.994 Initializing NVMe Controllers 00:24:52.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 
00:24:52.994 Controller IO queue size 128, less than required. 00:24:52.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:52.994 Controller IO queue size 128, less than required. 00:24:52.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:52.994 Controller IO queue size 128, less than required. 00:24:52.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:52.994 Controller IO queue size 128, less than required. 00:24:52.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:52.994 Controller IO queue size 128, less than required. 00:24:52.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:52.994 Controller IO queue size 128, less than required. 00:24:52.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:52.995 Controller IO queue size 128, less than required. 00:24:52.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:52.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:52.995 Controller IO queue size 128, less than required. 00:24:52.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:52.995 Controller IO queue size 128, less than required. 00:24:52.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:52.995 Controller IO queue size 128, less than required. 00:24:52.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:24:52.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:24:52.995 Initialization complete. Launching workers. 
00:24:52.995 ======================================================== 00:24:52.995 Latency(us) 00:24:52.995 Device Information : IOPS MiB/s Average min max 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2250.51 96.70 56881.18 674.02 107541.53 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2239.62 96.23 57172.47 718.90 106527.52 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2197.37 94.42 58304.45 743.98 105171.00 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2233.08 95.95 57345.46 828.57 101274.46 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2218.49 95.33 57762.86 687.34 108914.46 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2226.55 95.67 57575.73 729.05 111323.98 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2166.23 93.08 59265.24 624.77 98838.56 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2186.70 93.96 58041.48 730.32 97595.90 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2137.05 91.83 59399.56 884.99 97788.94 00:24:52.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2158.39 92.74 58823.92 939.08 97300.25 00:24:52.995 ======================================================== 00:24:52.995 Total : 22013.98 945.91 58043.89 624.77 111323.98 00:24:52.995 00:24:52.995 [2024-12-09 10:35:30.311920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8f560 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.311964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8f890 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.311996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d91900 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.312024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d91720 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.312053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d90740 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.312080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d90a70 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.312110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d90410 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.312138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d91ae0 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.312167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8fef0 is same with the state(6) to be set 00:24:52.995 [2024-12-09 10:35:30.312195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8fbc0 is same with the state(6) to be set 00:24:52.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:52.995 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2721798 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2721798 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2721798 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:53.956 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:53.957 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:53.957 rmmod nvme_tcp 00:24:53.957 rmmod nvme_fabrics 00:24:54.216 rmmod nvme_keyring 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2721520 ']' 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2721520 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2721520 ']' 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2721520 00:24:54.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2721520) - No such process 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2721520 is not found' 00:24:54.216 Process with pid 2721520 is not found 
00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.216 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.120 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:56.120 00:24:56.120 real 0m10.442s 00:24:56.120 user 0m27.582s 00:24:56.120 sys 0m5.155s 00:24:56.120 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.120 10:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:56.120 ************************************ 00:24:56.120 END TEST nvmf_shutdown_tc4 00:24:56.120 ************************************ 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:56.379 00:24:56.379 real 0m42.006s 00:24:56.379 user 1m45.332s 00:24:56.379 sys 0m13.940s 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:56.379 ************************************ 00:24:56.379 END TEST nvmf_shutdown 00:24:56.379 ************************************ 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:56.379 ************************************ 00:24:56.379 START TEST nvmf_nsid 00:24:56.379 ************************************ 00:24:56.379 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:56.379 * Looking for test storage... 
00:24:56.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.380 
10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:56.380 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:56.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.639 --rc genhtml_branch_coverage=1 00:24:56.639 --rc genhtml_function_coverage=1 00:24:56.639 --rc genhtml_legend=1 00:24:56.639 --rc geninfo_all_blocks=1 00:24:56.639 --rc 
geninfo_unexecuted_blocks=1 00:24:56.639 00:24:56.639 ' 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:56.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.639 --rc genhtml_branch_coverage=1 00:24:56.639 --rc genhtml_function_coverage=1 00:24:56.639 --rc genhtml_legend=1 00:24:56.639 --rc geninfo_all_blocks=1 00:24:56.639 --rc geninfo_unexecuted_blocks=1 00:24:56.639 00:24:56.639 ' 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:56.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.639 --rc genhtml_branch_coverage=1 00:24:56.639 --rc genhtml_function_coverage=1 00:24:56.639 --rc genhtml_legend=1 00:24:56.639 --rc geninfo_all_blocks=1 00:24:56.639 --rc geninfo_unexecuted_blocks=1 00:24:56.639 00:24:56.639 ' 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:56.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.639 --rc genhtml_branch_coverage=1 00:24:56.639 --rc genhtml_function_coverage=1 00:24:56.639 --rc genhtml_legend=1 00:24:56.639 --rc geninfo_all_blocks=1 00:24:56.639 --rc geninfo_unexecuted_blocks=1 00:24:56.639 00:24:56.639 ' 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
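The `cmp_versions` trace above (scripts/common.sh) splits each version string on `.-:` into arrays and compares component by component; here `1.15 < 2` holds, so the lcov branch/function coverage flags get enabled. A minimal standalone sketch of that comparison logic (a simplified reimplementation for illustration, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Simplified component-wise version comparison, modeled on the
# cmp_versions helper traced in the log: split on ".-:" and compare
# numerically, treating missing components as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then
            [[ $op == '>' ]]; return
        elif (( a < b )); then
            [[ $op == '<' ]]; return
        fi
    done
    [[ $op == '==' ]]
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"
```

The first differing component decides the result, which is why `1.15 < 2` even though `15 > 2` lexically.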
00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.639 10:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:56.639 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
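Note how paths/export.sh prepends the golangci/protoc/go directories on every source, so the PATH echoed above carries the same three entries many times over (harmless, since lookup stops at the first match, but noisy). A hedged sketch of deduplicating such a colon-separated list while preserving first-occurrence order (not part of the SPDK scripts, purely illustrative):

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a colon-separated PATH-like string,
# keeping the first occurrence of each directory.
dedup_path() {
    local out= entry
    local IFS=:
    for entry in $1; do
        case ":$out:" in
            *":$entry:"*) ;;                 # already present, skip
            *) out=${out:+$out:}$entry ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# prints /opt/go/bin:/usr/bin:/sbin
```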
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:56.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
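The `common.sh: line 33: [: : integer expression expected` message above is a real but benign shell slip: `'[' '' -eq 1 ']'` passes an empty string where `-eq` needs an integer, so `[` prints an error, returns nonzero, and the branch is simply skipped. The usual guard is to default the variable before the numeric test; a sketch (the variable name here is illustrative, not the one in common.sh):

```shell
#!/usr/bin/env bash
# An empty value fed to a numeric test triggers
# "[: : integer expression expected"; defaulting it first avoids that.
maybe_flag=""                       # illustrative stand-in for the unset flag

# Buggy form (errors when maybe_flag is empty):
#   [ "$maybe_flag" -eq 1 ] && echo "flag set"

# Guarded form: substitute 0 when the variable is empty or unset.
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```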
eval '_remove_spdk_ns 15> /dev/null' 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:56.640 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:03.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:03.210 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:03.210 Found net devices under 0000:86:00.0: cvl_0_0 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:03.210 Found net devices under 0000:86:00.1: cvl_0_1 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.210 10:35:39 
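The discovery loop above maps each supported PCI NIC to its kernel interface name by globbing `/sys/bus/pci/devices/<bdf>/net/*` and stripping the directory prefix with `${pci_net_devs[@]##*/}`. A minimal standalone sketch of that lookup (the BDF in the usage line is the one from this log; on another machine it differs, and the function simply prints nothing when the device is absent):

```shell
#!/usr/bin/env bash
# List the network interface(s) backed by a given PCI device,
# using the same sysfs glob the log's discovery loop relies on.
pci_to_netdevs() {
    local pci=$1 dev
    shopt -s nullglob                       # empty result instead of a literal glob
    local -a paths=("/sys/bus/pci/devices/$pci/net/"*)
    for dev in "${paths[@]}"; do
        printf '%s\n' "${dev##*/}"          # keep only the interface name
    done
}

pci_to_netdevs 0000:86:00.0                 # e.g. cvl_0_0 on the log's machine
```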
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.210 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.210 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.210 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.211 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:25:03.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:25:03.211 00:25:03.211 --- 10.0.0.2 ping statistics --- 00:25:03.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.211 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:25:03.211 00:25:03.211 --- 10.0.0.1 ping statistics --- 00:25:03.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.211 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:03.211 10:35:40 
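The target-side setup traced above is the standard SPDK phy-mode pattern: move one port (`cvl_0_0`) into a fresh network namespace, address both ends on 10.0.0.0/24, open TCP port 4420 in the firewall, and prove reachability with one ping in each direction before launching `nvmf_tgt` inside the namespace. A condensed, hedged sketch of those steps (requires root and these two physical ports, so it is only defined here, not run; interface names follow the log):

```shell
#!/usr/bin/env bash
# Condensed version of the namespace plumbing traced in the log.
# Invoke setup_tgt_netns explicitly, as root, on a suitable machine.
setup_tgt_netns() {
    local ns=cvl_0_0_ns_spdk
    local tgt_if=cvl_0_0      # target-side port, moved into the namespace
    local ini_if=cvl_0_1      # initiator-side port, stays in the root namespace

    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up

    # Allow NVMe/TCP traffic in on the target listen port (4420).
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

    # Verify connectivity both ways before starting nvmf_tgt.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Isolating the target port in its own namespace is what lets one physical machine act as both initiator and target over real NIC hardware.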
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2726282 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2726282 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2726282 ']' 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:03.211 [2024-12-09 10:35:40.152660] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:03.211 [2024-12-09 10:35:40.152705] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.211 [2024-12-09 10:35:40.232585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.211 [2024-12-09 10:35:40.273389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.211 [2024-12-09 10:35:40.273424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.211 [2024-12-09 10:35:40.273432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.211 [2024-12-09 10:35:40.273438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.211 [2024-12-09 10:35:40.273442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:03.211 [2024-12-09 10:35:40.274001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2726305 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.211 
10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6383dee4-8cbf-43f4-bbc5-7e47a39bd6b4 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2732e86e-bd0b-4451-95f2-4795ab7abaae 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5637f5bb-1ac6-4f6f-a1a5-e577ba817eec 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:03.211 null0 00:25:03.211 null1 00:25:03.211 null2 00:25:03.211 [2024-12-09 10:35:40.458833] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:03.211 [2024-12-09 10:35:40.458876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726305 ] 00:25:03.211 [2024-12-09 10:35:40.461559] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.211 [2024-12-09 10:35:40.485739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2726305 /var/tmp/tgt2.sock 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2726305 ']' 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:25:03.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:03.211 [2024-12-09 10:35:40.534397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.211 [2024-12-09 10:35:40.580352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:03.211 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:03.477 [2024-12-09 10:35:41.109185] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.477 [2024-12-09 10:35:41.125293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:03.477 nvme0n1 nvme0n2 00:25:03.477 nvme1n1 00:25:03.477 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:03.477 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:03.477 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:04.858 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:04.858 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:04.858 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:25:04.858 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:04.858 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:04.858 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:04.859 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:04.859 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:04.859 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:04.859 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:04.859 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:04.859 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:04.859 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6383dee4-8cbf-43f4-bbc5-7e47a39bd6b4 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:05.795 10:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6383dee48cbf43f4bbc57e47a39bd6b4 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6383DEE48CBF43F4BBC57E47A39BD6B4 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6383DEE48CBF43F4BBC57E47A39BD6B4 == \6\3\8\3\D\E\E\4\8\C\B\F\4\3\F\4\B\B\C\5\7\E\4\7\A\3\9\B\D\6\B\4 ]] 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2732e86e-bd0b-4451-95f2-4795ab7abaae 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:05.795 
10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2732e86ebd0b445195f24795ab7abaae 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2732E86EBD0B445195F24795AB7ABAAE 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2732E86EBD0B445195F24795AB7ABAAE == \2\7\3\2\E\8\6\E\B\D\0\B\4\4\5\1\9\5\F\2\4\7\9\5\A\B\7\A\B\A\A\E ]] 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5637f5bb-1ac6-4f6f-a1a5-e577ba817eec 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5637f5bb1ac64f6fa1a5e577ba817eec 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5637F5BB1AC64F6FA1A5E577BA817EEC 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5637F5BB1AC64F6FA1A5E577BA817EEC == \5\6\3\7\F\5\B\B\1\A\C\6\4\F\6\F\A\1\A\5\E\5\7\7\B\A\8\1\7\E\E\C ]] 00:25:05.795 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2726305 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2726305 ']' 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2726305 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726305 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726305' 00:25:06.054 killing process with pid 2726305 00:25:06.054 10:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2726305 00:25:06.054 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2726305 00:25:06.314 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:06.314 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.314 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:25:06.314 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.314 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:25:06.314 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.314 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.573 rmmod nvme_tcp 00:25:06.573 rmmod nvme_fabrics 00:25:06.573 rmmod nvme_keyring 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2726282 ']' 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2726282 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2726282 ']' 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2726282 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.573 10:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726282 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726282' 00:25:06.573 killing process with pid 2726282 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2726282 00:25:06.573 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2726282 00:25:06.834 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.834 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.835 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.835 10:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.891 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.891 00:25:08.891 real 0m12.437s 00:25:08.891 user 0m9.729s 00:25:08.891 sys 0m5.499s 00:25:08.891 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.891 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:08.891 ************************************ 00:25:08.891 END TEST nvmf_nsid 00:25:08.891 ************************************ 00:25:08.891 10:35:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:08.891 00:25:08.891 real 12m4.521s 00:25:08.891 user 26m3.969s 00:25:08.891 sys 3m43.475s 00:25:08.891 10:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.891 10:35:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:08.891 ************************************ 00:25:08.891 END TEST nvmf_target_extra 00:25:08.891 ************************************ 00:25:08.891 10:35:46 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:08.891 10:35:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:08.891 10:35:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.891 10:35:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.891 ************************************ 00:25:08.891 START TEST nvmf_host 00:25:08.891 ************************************ 00:25:08.891 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:08.891 * Looking for test storage... 
00:25:08.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:08.891 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:08.891 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:08.891 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:09.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.150 --rc genhtml_branch_coverage=1 00:25:09.150 --rc genhtml_function_coverage=1 00:25:09.150 --rc genhtml_legend=1 00:25:09.150 --rc geninfo_all_blocks=1 00:25:09.150 --rc geninfo_unexecuted_blocks=1 00:25:09.150 00:25:09.150 ' 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:09.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.150 --rc genhtml_branch_coverage=1 00:25:09.150 --rc genhtml_function_coverage=1 00:25:09.150 --rc genhtml_legend=1 00:25:09.150 --rc 
geninfo_all_blocks=1 00:25:09.150 --rc geninfo_unexecuted_blocks=1 00:25:09.150 00:25:09.150 ' 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:09.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.150 --rc genhtml_branch_coverage=1 00:25:09.150 --rc genhtml_function_coverage=1 00:25:09.150 --rc genhtml_legend=1 00:25:09.150 --rc geninfo_all_blocks=1 00:25:09.150 --rc geninfo_unexecuted_blocks=1 00:25:09.150 00:25:09.150 ' 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:09.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.150 --rc genhtml_branch_coverage=1 00:25:09.150 --rc genhtml_function_coverage=1 00:25:09.150 --rc genhtml_legend=1 00:25:09.150 --rc geninfo_all_blocks=1 00:25:09.150 --rc geninfo_unexecuted_blocks=1 00:25:09.150 00:25:09.150 ' 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.150 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.151 ************************************ 00:25:09.151 START TEST nvmf_multicontroller 00:25:09.151 ************************************ 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:09.151 * Looking for test storage... 
00:25:09.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:25:09.151 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:09.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.410 --rc genhtml_branch_coverage=1 00:25:09.410 --rc genhtml_function_coverage=1 
00:25:09.410 --rc genhtml_legend=1 00:25:09.410 --rc geninfo_all_blocks=1 00:25:09.410 --rc geninfo_unexecuted_blocks=1 00:25:09.410 00:25:09.410 ' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:09.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.410 --rc genhtml_branch_coverage=1 00:25:09.410 --rc genhtml_function_coverage=1 00:25:09.410 --rc genhtml_legend=1 00:25:09.410 --rc geninfo_all_blocks=1 00:25:09.410 --rc geninfo_unexecuted_blocks=1 00:25:09.410 00:25:09.410 ' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:09.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.410 --rc genhtml_branch_coverage=1 00:25:09.410 --rc genhtml_function_coverage=1 00:25:09.410 --rc genhtml_legend=1 00:25:09.410 --rc geninfo_all_blocks=1 00:25:09.410 --rc geninfo_unexecuted_blocks=1 00:25:09.410 00:25:09.410 ' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:09.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.410 --rc genhtml_branch_coverage=1 00:25:09.410 --rc genhtml_function_coverage=1 00:25:09.410 --rc genhtml_legend=1 00:25:09.410 --rc geninfo_all_blocks=1 00:25:09.410 --rc geninfo_unexecuted_blocks=1 00:25:09.410 00:25:09.410 ' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:09.410 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.411 10:35:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.411 10:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:15.979 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:15.979 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.979 10:35:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:15.979 Found net devices under 0000:86:00.0: cvl_0_0 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:15.979 Found net devices under 0000:86:00.1: cvl_0_1 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.979 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:25:15.980 00:25:15.980 --- 10.0.0.2 ping statistics --- 00:25:15.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.980 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:25:15.980 00:25:15.980 --- 10.0.0.1 ping statistics --- 00:25:15.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.980 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2730614 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2730614 00:25:15.980 10:35:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2730614 ']' 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.980 10:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:15.980 [2024-12-09 10:35:52.969476] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:25:15.980 [2024-12-09 10:35:52.969528] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.980 [2024-12-09 10:35:53.052841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:15.980 [2024-12-09 10:35:53.094786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.980 [2024-12-09 10:35:53.094825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:15.980 [2024-12-09 10:35:53.094832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.980 [2024-12-09 10:35:53.094838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.980 [2024-12-09 10:35:53.094843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.980 [2024-12-09 10:35:53.096278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.980 [2024-12-09 10:35:53.096386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.980 [2024-12-09 10:35:53.096387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.239 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.239 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:16.239 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.239 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.239 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.239 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.239 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 [2024-12-09 10:35:53.844179] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 Malloc0 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 [2024-12-09 
10:35:53.905476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 [2024-12-09 10:35:53.913428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 Malloc1 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.240 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2730861 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2730861 /var/tmp/bdevperf.sock 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2730861 ']' 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.499 10:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 NVMe0n1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.757 1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:16.757 10:35:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 request: 00:25:16.757 { 00:25:16.757 "name": "NVMe0", 00:25:16.757 "trtype": "tcp", 00:25:16.757 "traddr": "10.0.0.2", 00:25:16.757 "adrfam": "ipv4", 00:25:16.757 "trsvcid": "4420", 00:25:16.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.757 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:16.757 "hostaddr": "10.0.0.1", 00:25:16.757 "prchk_reftag": false, 00:25:16.757 "prchk_guard": false, 00:25:16.757 "hdgst": false, 00:25:16.757 "ddgst": false, 00:25:16.757 "allow_unrecognized_csi": false, 00:25:16.757 "method": "bdev_nvme_attach_controller", 00:25:16.757 "req_id": 1 00:25:16.757 } 00:25:16.757 Got JSON-RPC error response 00:25:16.757 response: 00:25:16.757 { 00:25:16.757 "code": -114, 00:25:16.757 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:16.757 } 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:16.757 10:35:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:16.757 request: 00:25:16.757 { 00:25:16.757 "name": "NVMe0", 00:25:16.757 "trtype": "tcp", 00:25:16.757 "traddr": "10.0.0.2", 00:25:16.757 "adrfam": "ipv4", 00:25:16.757 "trsvcid": "4420", 00:25:16.757 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:16.757 "hostaddr": "10.0.0.1", 00:25:16.757 "prchk_reftag": false, 00:25:16.757 "prchk_guard": false, 00:25:16.757 "hdgst": false, 00:25:16.757 "ddgst": false, 00:25:16.757 "allow_unrecognized_csi": false, 00:25:16.757 "method": "bdev_nvme_attach_controller", 00:25:16.757 "req_id": 1 00:25:16.757 } 00:25:16.757 Got JSON-RPC error response 00:25:16.757 response: 00:25:16.757 { 00:25:16.757 "code": -114, 00:25:16.757 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:16.757 } 00:25:16.757 10:35:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:16.757 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.015 request: 00:25:17.015 { 00:25:17.015 "name": "NVMe0", 00:25:17.015 "trtype": "tcp", 00:25:17.015 "traddr": "10.0.0.2", 00:25:17.015 "adrfam": "ipv4", 00:25:17.015 "trsvcid": "4420", 00:25:17.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.015 "hostaddr": "10.0.0.1", 00:25:17.015 "prchk_reftag": false, 00:25:17.015 "prchk_guard": false, 00:25:17.015 "hdgst": false, 00:25:17.015 "ddgst": false, 00:25:17.015 "multipath": "disable", 00:25:17.015 "allow_unrecognized_csi": false, 00:25:17.015 "method": "bdev_nvme_attach_controller", 00:25:17.015 "req_id": 1 00:25:17.015 } 00:25:17.015 Got JSON-RPC error response 00:25:17.015 response: 00:25:17.015 { 00:25:17.015 "code": -114, 00:25:17.015 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:17.015 } 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.015 request: 00:25:17.015 { 00:25:17.015 "name": "NVMe0", 00:25:17.015 "trtype": "tcp", 00:25:17.015 "traddr": "10.0.0.2", 00:25:17.015 "adrfam": "ipv4", 00:25:17.015 "trsvcid": "4420", 00:25:17.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.015 "hostaddr": "10.0.0.1", 00:25:17.015 "prchk_reftag": false, 00:25:17.015 "prchk_guard": false, 00:25:17.015 "hdgst": false, 00:25:17.015 "ddgst": false, 00:25:17.015 "multipath": "failover", 00:25:17.015 "allow_unrecognized_csi": false, 00:25:17.015 "method": "bdev_nvme_attach_controller", 00:25:17.015 "req_id": 1 00:25:17.015 } 00:25:17.015 Got JSON-RPC error response 00:25:17.015 response: 00:25:17.015 { 00:25:17.015 "code": -114, 00:25:17.015 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:17.015 } 00:25:17.015 10:35:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.015 NVMe0n1 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.015 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.272 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:17.272 10:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:18.218 { 00:25:18.218 "results": [ 00:25:18.218 { 00:25:18.218 "job": "NVMe0n1", 00:25:18.218 "core_mask": "0x1", 00:25:18.218 "workload": "write", 00:25:18.218 "status": "finished", 00:25:18.218 "queue_depth": 128, 00:25:18.218 "io_size": 4096, 00:25:18.218 "runtime": 1.005871, 00:25:18.218 "iops": 24892.85405384985, 00:25:18.218 "mibps": 97.23771114785097, 00:25:18.218 "io_failed": 0, 00:25:18.218 "io_timeout": 0, 00:25:18.218 "avg_latency_us": 5129.889947985904, 00:25:18.218 "min_latency_us": 2980.327619047619, 00:25:18.218 "max_latency_us": 10423.344761904762 00:25:18.218 } 00:25:18.218 ], 00:25:18.218 "core_count": 1 00:25:18.218 } 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2730861 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2730861 ']' 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2730861 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.218 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730861 00:25:18.476 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.476 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.476 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730861' 00:25:18.476 killing process with pid 2730861 00:25:18.476 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2730861 00:25:18.476 10:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2730861 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:18.476 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:18.476 [2024-12-09 10:35:54.015389] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:18.476 [2024-12-09 10:35:54.015443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730861 ] 00:25:18.476 [2024-12-09 10:35:54.090978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.476 [2024-12-09 10:35:54.133320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.476 [2024-12-09 10:35:54.750961] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name d7bae88d-53a7-4a8b-9440-420d613890fe already exists 00:25:18.476 [2024-12-09 10:35:54.750987] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:d7bae88d-53a7-4a8b-9440-420d613890fe alias for bdev NVMe1n1 00:25:18.476 [2024-12-09 10:35:54.750995] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:18.476 Running I/O for 1 seconds... 00:25:18.476 24848.00 IOPS, 97.06 MiB/s 00:25:18.476 Latency(us) 00:25:18.476 [2024-12-09T09:35:56.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.476 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:18.476 NVMe0n1 : 1.01 24892.85 97.24 0.00 0.00 5129.89 2980.33 10423.34 00:25:18.476 [2024-12-09T09:35:56.200Z] =================================================================================================================== 00:25:18.476 [2024-12-09T09:35:56.200Z] Total : 24892.85 97.24 0.00 0.00 5129.89 2980.33 10423.34 00:25:18.476 Received shutdown signal, test time was about 1.000000 seconds 00:25:18.476 00:25:18.476 Latency(us) 00:25:18.476 [2024-12-09T09:35:56.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.476 [2024-12-09T09:35:56.200Z] =================================================================================================================== 00:25:18.476 [2024-12-09T09:35:56.200Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:25:18.476 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.476 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.476 rmmod nvme_tcp 00:25:18.476 rmmod nvme_fabrics 00:25:18.735 rmmod nvme_keyring 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2730614 ']' 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2730614 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2730614 ']' 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2730614 
00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730614 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730614' 00:25:18.735 killing process with pid 2730614 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2730614 00:25:18.735 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2730614 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.993 10:35:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.894 10:35:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.894 00:25:20.894 real 0m11.847s 00:25:20.894 user 0m14.328s 00:25:20.894 sys 0m5.278s 00:25:20.894 10:35:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.894 10:35:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.894 ************************************ 00:25:20.894 END TEST nvmf_multicontroller 00:25:20.894 ************************************ 00:25:20.894 10:35:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:20.894 10:35:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.894 10:35:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.894 10:35:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.153 ************************************ 00:25:21.153 START TEST nvmf_aer 00:25:21.153 ************************************ 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:21.153 * Looking for test storage... 
00:25:21.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:21.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.153 --rc genhtml_branch_coverage=1 00:25:21.153 --rc genhtml_function_coverage=1 00:25:21.153 --rc genhtml_legend=1 00:25:21.153 --rc geninfo_all_blocks=1 00:25:21.153 --rc geninfo_unexecuted_blocks=1 00:25:21.153 00:25:21.153 ' 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:21.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.153 --rc 
genhtml_branch_coverage=1 00:25:21.153 --rc genhtml_function_coverage=1 00:25:21.153 --rc genhtml_legend=1 00:25:21.153 --rc geninfo_all_blocks=1 00:25:21.153 --rc geninfo_unexecuted_blocks=1 00:25:21.153 00:25:21.153 ' 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:21.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.153 --rc genhtml_branch_coverage=1 00:25:21.153 --rc genhtml_function_coverage=1 00:25:21.153 --rc genhtml_legend=1 00:25:21.153 --rc geninfo_all_blocks=1 00:25:21.153 --rc geninfo_unexecuted_blocks=1 00:25:21.153 00:25:21.153 ' 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:21.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.153 --rc genhtml_branch_coverage=1 00:25:21.153 --rc genhtml_function_coverage=1 00:25:21.153 --rc genhtml_legend=1 00:25:21.153 --rc geninfo_all_blocks=1 00:25:21.153 --rc geninfo_unexecuted_blocks=1 00:25:21.153 00:25:21.153 ' 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.153 10:35:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:21.153 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.154 10:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:27.719 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:27.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.719 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.720 10:36:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:27.720 Found net devices under 0000:86:00.0: cvl_0_0 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:27.720 Found net devices under 0000:86:00.1: cvl_0_1 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:27.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:25:27.720 00:25:27.720 --- 10.0.0.2 ping statistics --- 00:25:27.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.720 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:25:27.720 00:25:27.720 --- 10.0.0.1 ping statistics --- 00:25:27.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.720 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2734763 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2734763 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2734763 ']' 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.720 10:36:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 [2024-12-09 10:36:04.837758] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:25:27.720 [2024-12-09 10:36:04.837806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.720 [2024-12-09 10:36:04.918901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:27.720 [2024-12-09 10:36:04.965253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:27.720 [2024-12-09 10:36:04.965285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.720 [2024-12-09 10:36:04.965292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.720 [2024-12-09 10:36:04.965298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.720 [2024-12-09 10:36:04.965303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.720 [2024-12-09 10:36:04.966665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.720 [2024-12-09 10:36:04.966774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.720 [2024-12-09 10:36:04.966880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.720 [2024-12-09 10:36:04.966880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.976 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.976 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:25:27.976 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.976 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.976 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.233 [2024-12-09 10:36:05.711467] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.233 Malloc0 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.233 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.234 [2024-12-09 10:36:05.771647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.234 [ 00:25:28.234 { 00:25:28.234 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:28.234 "subtype": "Discovery", 00:25:28.234 "listen_addresses": [], 00:25:28.234 "allow_any_host": true, 00:25:28.234 "hosts": [] 00:25:28.234 }, 00:25:28.234 { 00:25:28.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.234 "subtype": "NVMe", 00:25:28.234 "listen_addresses": [ 00:25:28.234 { 00:25:28.234 "trtype": "TCP", 00:25:28.234 "adrfam": "IPv4", 00:25:28.234 "traddr": "10.0.0.2", 00:25:28.234 "trsvcid": "4420" 00:25:28.234 } 00:25:28.234 ], 00:25:28.234 "allow_any_host": true, 00:25:28.234 "hosts": [], 00:25:28.234 "serial_number": "SPDK00000000000001", 00:25:28.234 "model_number": "SPDK bdev Controller", 00:25:28.234 "max_namespaces": 2, 00:25:28.234 "min_cntlid": 1, 00:25:28.234 "max_cntlid": 65519, 00:25:28.234 "namespaces": [ 00:25:28.234 { 00:25:28.234 "nsid": 1, 00:25:28.234 "bdev_name": "Malloc0", 00:25:28.234 "name": "Malloc0", 00:25:28.234 "nguid": "961599969C764178A6287ABDD65166C1", 00:25:28.234 "uuid": "96159996-9c76-4178-a628-7abdd65166c1" 00:25:28.234 } 00:25:28.234 ] 00:25:28.234 } 00:25:28.234 ] 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2735015 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:25:28.234 10:36:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.491 Malloc1 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.491 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.491 Asynchronous Event Request test 00:25:28.491 Attaching to 10.0.0.2 00:25:28.491 Attached to 10.0.0.2 00:25:28.491 Registering asynchronous event callbacks... 00:25:28.491 Starting namespace attribute notice tests for all controllers... 00:25:28.491 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:28.491 aer_cb - Changed Namespace 00:25:28.491 Cleaning up... 
00:25:28.492 [ 00:25:28.492 { 00:25:28.492 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:28.492 "subtype": "Discovery", 00:25:28.492 "listen_addresses": [], 00:25:28.492 "allow_any_host": true, 00:25:28.492 "hosts": [] 00:25:28.492 }, 00:25:28.492 { 00:25:28.492 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.492 "subtype": "NVMe", 00:25:28.492 "listen_addresses": [ 00:25:28.492 { 00:25:28.492 "trtype": "TCP", 00:25:28.492 "adrfam": "IPv4", 00:25:28.492 "traddr": "10.0.0.2", 00:25:28.492 "trsvcid": "4420" 00:25:28.492 } 00:25:28.492 ], 00:25:28.492 "allow_any_host": true, 00:25:28.492 "hosts": [], 00:25:28.492 "serial_number": "SPDK00000000000001", 00:25:28.492 "model_number": "SPDK bdev Controller", 00:25:28.492 "max_namespaces": 2, 00:25:28.492 "min_cntlid": 1, 00:25:28.492 "max_cntlid": 65519, 00:25:28.492 "namespaces": [ 00:25:28.492 { 00:25:28.492 "nsid": 1, 00:25:28.492 "bdev_name": "Malloc0", 00:25:28.492 "name": "Malloc0", 00:25:28.492 "nguid": "961599969C764178A6287ABDD65166C1", 00:25:28.492 "uuid": "96159996-9c76-4178-a628-7abdd65166c1" 00:25:28.492 }, 00:25:28.492 { 00:25:28.492 "nsid": 2, 00:25:28.492 "bdev_name": "Malloc1", 00:25:28.492 "name": "Malloc1", 00:25:28.492 "nguid": "F07B8764B4534F62B03FC13EC7814507", 00:25:28.492 "uuid": "f07b8764-b453-4f62-b03f-c13ec7814507" 00:25:28.492 } 00:25:28.492 ] 00:25:28.492 } 00:25:28.492 ] 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2735015 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.492 10:36:06 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.492 rmmod nvme_tcp 00:25:28.492 rmmod nvme_fabrics 00:25:28.492 rmmod nvme_keyring 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2734763 ']' 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2734763 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2734763 ']' 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2734763 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.492 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2734763 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2734763' 00:25:28.750 killing process with pid 2734763 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2734763 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2734763 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.750 10:36:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.280 00:25:31.280 real 0m9.849s 00:25:31.280 user 0m7.708s 00:25:31.280 sys 0m4.902s 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.280 ************************************ 00:25:31.280 END TEST nvmf_aer 00:25:31.280 ************************************ 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.280 ************************************ 00:25:31.280 START TEST nvmf_async_init 00:25:31.280 ************************************ 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:31.280 * Looking for test storage... 
00:25:31.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.280 10:36:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:31.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.280 --rc genhtml_branch_coverage=1 00:25:31.280 --rc genhtml_function_coverage=1 00:25:31.280 --rc genhtml_legend=1 00:25:31.280 --rc geninfo_all_blocks=1 00:25:31.280 --rc geninfo_unexecuted_blocks=1 00:25:31.280 
00:25:31.280 ' 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:31.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.280 --rc genhtml_branch_coverage=1 00:25:31.280 --rc genhtml_function_coverage=1 00:25:31.280 --rc genhtml_legend=1 00:25:31.280 --rc geninfo_all_blocks=1 00:25:31.280 --rc geninfo_unexecuted_blocks=1 00:25:31.280 00:25:31.280 ' 00:25:31.280 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:31.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.280 --rc genhtml_branch_coverage=1 00:25:31.280 --rc genhtml_function_coverage=1 00:25:31.280 --rc genhtml_legend=1 00:25:31.280 --rc geninfo_all_blocks=1 00:25:31.280 --rc geninfo_unexecuted_blocks=1 00:25:31.280 00:25:31.280 ' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:31.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.281 --rc genhtml_branch_coverage=1 00:25:31.281 --rc genhtml_function_coverage=1 00:25:31.281 --rc genhtml_legend=1 00:25:31.281 --rc geninfo_all_blocks=1 00:25:31.281 --rc geninfo_unexecuted_blocks=1 00:25:31.281 00:25:31.281 ' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=88caff73c5a94efbb8ab33fda7a68621 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.281 10:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.849 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.850 10:36:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:37.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:37.850 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:37.850 Found net devices under 0000:86:00.0: cvl_0_0 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:37.850 Found net devices under 0000:86:00.1: cvl_0_1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:37.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:25:37.850 00:25:37.850 --- 10.0.0.2 ping statistics --- 00:25:37.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.850 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:37.850 00:25:37.850 --- 10.0.0.1 ping statistics --- 00:25:37.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.850 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2738922 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2738922 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2738922 ']' 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.850 10:36:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:37.850 [2024-12-09 10:36:14.768551] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:37.850 [2024-12-09 10:36:14.768601] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.850 [2024-12-09 10:36:14.849611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.850 [2024-12-09 10:36:14.889373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.850 [2024-12-09 10:36:14.889411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.850 [2024-12-09 10:36:14.889418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.851 [2024-12-09 10:36:14.889423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.851 [2024-12-09 10:36:14.889428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:37.851 [2024-12-09 10:36:14.890027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.109 [2024-12-09 10:36:15.648407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.109 null0 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 88caff73c5a94efbb8ab33fda7a68621 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.109 [2024-12-09 10:36:15.692652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.109 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.367 nvme0n1 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.367 [ 00:25:38.367 { 00:25:38.367 "name": "nvme0n1", 00:25:38.367 "aliases": [ 00:25:38.367 "88caff73-c5a9-4efb-b8ab-33fda7a68621" 00:25:38.367 ], 00:25:38.367 "product_name": "NVMe disk", 00:25:38.367 "block_size": 512, 00:25:38.367 "num_blocks": 2097152, 00:25:38.367 "uuid": "88caff73-c5a9-4efb-b8ab-33fda7a68621", 00:25:38.367 "numa_id": 1, 00:25:38.367 "assigned_rate_limits": { 00:25:38.367 "rw_ios_per_sec": 0, 00:25:38.367 "rw_mbytes_per_sec": 0, 00:25:38.367 "r_mbytes_per_sec": 0, 00:25:38.367 "w_mbytes_per_sec": 0 00:25:38.367 }, 00:25:38.367 "claimed": false, 00:25:38.367 "zoned": false, 00:25:38.367 "supported_io_types": { 00:25:38.367 "read": true, 00:25:38.367 "write": true, 00:25:38.367 "unmap": false, 00:25:38.367 "flush": true, 00:25:38.367 "reset": true, 00:25:38.367 "nvme_admin": true, 00:25:38.367 "nvme_io": true, 00:25:38.367 "nvme_io_md": false, 00:25:38.367 "write_zeroes": true, 00:25:38.367 "zcopy": false, 00:25:38.367 "get_zone_info": false, 00:25:38.367 "zone_management": false, 00:25:38.367 "zone_append": false, 00:25:38.367 "compare": true, 00:25:38.367 "compare_and_write": true, 00:25:38.367 "abort": true, 00:25:38.367 "seek_hole": false, 00:25:38.367 "seek_data": false, 00:25:38.367 "copy": true, 00:25:38.367 
"nvme_iov_md": false 00:25:38.367 }, 00:25:38.367 "memory_domains": [ 00:25:38.367 { 00:25:38.367 "dma_device_id": "system", 00:25:38.367 "dma_device_type": 1 00:25:38.367 } 00:25:38.367 ], 00:25:38.367 "driver_specific": { 00:25:38.367 "nvme": [ 00:25:38.367 { 00:25:38.367 "trid": { 00:25:38.367 "trtype": "TCP", 00:25:38.367 "adrfam": "IPv4", 00:25:38.367 "traddr": "10.0.0.2", 00:25:38.367 "trsvcid": "4420", 00:25:38.367 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:38.367 }, 00:25:38.367 "ctrlr_data": { 00:25:38.367 "cntlid": 1, 00:25:38.367 "vendor_id": "0x8086", 00:25:38.367 "model_number": "SPDK bdev Controller", 00:25:38.367 "serial_number": "00000000000000000000", 00:25:38.367 "firmware_revision": "25.01", 00:25:38.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.367 "oacs": { 00:25:38.367 "security": 0, 00:25:38.367 "format": 0, 00:25:38.367 "firmware": 0, 00:25:38.367 "ns_manage": 0 00:25:38.367 }, 00:25:38.367 "multi_ctrlr": true, 00:25:38.367 "ana_reporting": false 00:25:38.367 }, 00:25:38.367 "vs": { 00:25:38.367 "nvme_version": "1.3" 00:25:38.367 }, 00:25:38.367 "ns_data": { 00:25:38.367 "id": 1, 00:25:38.367 "can_share": true 00:25:38.367 } 00:25:38.367 } 00:25:38.367 ], 00:25:38.367 "mp_policy": "active_passive" 00:25:38.367 } 00:25:38.367 } 00:25:38.367 ] 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.367 10:36:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.367 [2024-12-09 10:36:15.954240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:38.367 [2024-12-09 10:36:15.954299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x181fc00 (9): Bad file descriptor 00:25:38.367 [2024-12-09 10:36:16.085886] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:25:38.367 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.367 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:38.367 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.367 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.625 [ 00:25:38.625 { 00:25:38.625 "name": "nvme0n1", 00:25:38.625 "aliases": [ 00:25:38.625 "88caff73-c5a9-4efb-b8ab-33fda7a68621" 00:25:38.625 ], 00:25:38.625 "product_name": "NVMe disk", 00:25:38.625 "block_size": 512, 00:25:38.625 "num_blocks": 2097152, 00:25:38.625 "uuid": "88caff73-c5a9-4efb-b8ab-33fda7a68621", 00:25:38.625 "numa_id": 1, 00:25:38.625 "assigned_rate_limits": { 00:25:38.625 "rw_ios_per_sec": 0, 00:25:38.625 "rw_mbytes_per_sec": 0, 00:25:38.625 "r_mbytes_per_sec": 0, 00:25:38.625 "w_mbytes_per_sec": 0 00:25:38.625 }, 00:25:38.625 "claimed": false, 00:25:38.625 "zoned": false, 00:25:38.625 "supported_io_types": { 00:25:38.625 "read": true, 00:25:38.625 "write": true, 00:25:38.625 "unmap": false, 00:25:38.625 "flush": true, 00:25:38.625 "reset": true, 00:25:38.625 "nvme_admin": true, 00:25:38.625 "nvme_io": true, 00:25:38.625 "nvme_io_md": false, 00:25:38.625 "write_zeroes": true, 00:25:38.625 "zcopy": false, 00:25:38.625 "get_zone_info": false, 00:25:38.625 "zone_management": false, 00:25:38.625 "zone_append": false, 00:25:38.625 "compare": true, 00:25:38.625 "compare_and_write": true, 00:25:38.625 "abort": true, 00:25:38.625 "seek_hole": false, 00:25:38.625 "seek_data": false, 00:25:38.625 "copy": true, 00:25:38.625 "nvme_iov_md": false 00:25:38.625 }, 00:25:38.626 "memory_domains": [ 
00:25:38.626 { 00:25:38.626 "dma_device_id": "system", 00:25:38.626 "dma_device_type": 1 00:25:38.626 } 00:25:38.626 ], 00:25:38.626 "driver_specific": { 00:25:38.626 "nvme": [ 00:25:38.626 { 00:25:38.626 "trid": { 00:25:38.626 "trtype": "TCP", 00:25:38.626 "adrfam": "IPv4", 00:25:38.626 "traddr": "10.0.0.2", 00:25:38.626 "trsvcid": "4420", 00:25:38.626 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:38.626 }, 00:25:38.626 "ctrlr_data": { 00:25:38.626 "cntlid": 2, 00:25:38.626 "vendor_id": "0x8086", 00:25:38.626 "model_number": "SPDK bdev Controller", 00:25:38.626 "serial_number": "00000000000000000000", 00:25:38.626 "firmware_revision": "25.01", 00:25:38.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.626 "oacs": { 00:25:38.626 "security": 0, 00:25:38.626 "format": 0, 00:25:38.626 "firmware": 0, 00:25:38.626 "ns_manage": 0 00:25:38.626 }, 00:25:38.626 "multi_ctrlr": true, 00:25:38.626 "ana_reporting": false 00:25:38.626 }, 00:25:38.626 "vs": { 00:25:38.626 "nvme_version": "1.3" 00:25:38.626 }, 00:25:38.626 "ns_data": { 00:25:38.626 "id": 1, 00:25:38.626 "can_share": true 00:25:38.626 } 00:25:38.626 } 00:25:38.626 ], 00:25:38.626 "mp_policy": "active_passive" 00:25:38.626 } 00:25:38.626 } 00:25:38.626 ] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GDB70hegXN 
00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GDB70hegXN 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.GDB70hegXN 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 [2024-12-09 10:36:16.158854] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:38.626 [2024-12-09 10:36:16.158946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 [2024-12-09 10:36:16.174903] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:38.626 nvme0n1 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 [ 00:25:38.626 { 00:25:38.626 "name": "nvme0n1", 00:25:38.626 "aliases": [ 00:25:38.626 "88caff73-c5a9-4efb-b8ab-33fda7a68621" 00:25:38.626 ], 00:25:38.626 "product_name": "NVMe disk", 00:25:38.626 "block_size": 512, 00:25:38.626 "num_blocks": 2097152, 00:25:38.626 "uuid": "88caff73-c5a9-4efb-b8ab-33fda7a68621", 00:25:38.626 "numa_id": 1, 00:25:38.626 "assigned_rate_limits": { 00:25:38.626 "rw_ios_per_sec": 0, 00:25:38.626 
"rw_mbytes_per_sec": 0, 00:25:38.626 "r_mbytes_per_sec": 0, 00:25:38.626 "w_mbytes_per_sec": 0 00:25:38.626 }, 00:25:38.626 "claimed": false, 00:25:38.626 "zoned": false, 00:25:38.626 "supported_io_types": { 00:25:38.626 "read": true, 00:25:38.626 "write": true, 00:25:38.626 "unmap": false, 00:25:38.626 "flush": true, 00:25:38.626 "reset": true, 00:25:38.626 "nvme_admin": true, 00:25:38.626 "nvme_io": true, 00:25:38.626 "nvme_io_md": false, 00:25:38.626 "write_zeroes": true, 00:25:38.626 "zcopy": false, 00:25:38.626 "get_zone_info": false, 00:25:38.626 "zone_management": false, 00:25:38.626 "zone_append": false, 00:25:38.626 "compare": true, 00:25:38.626 "compare_and_write": true, 00:25:38.626 "abort": true, 00:25:38.626 "seek_hole": false, 00:25:38.626 "seek_data": false, 00:25:38.626 "copy": true, 00:25:38.626 "nvme_iov_md": false 00:25:38.626 }, 00:25:38.626 "memory_domains": [ 00:25:38.626 { 00:25:38.626 "dma_device_id": "system", 00:25:38.626 "dma_device_type": 1 00:25:38.626 } 00:25:38.626 ], 00:25:38.626 "driver_specific": { 00:25:38.626 "nvme": [ 00:25:38.626 { 00:25:38.626 "trid": { 00:25:38.626 "trtype": "TCP", 00:25:38.626 "adrfam": "IPv4", 00:25:38.626 "traddr": "10.0.0.2", 00:25:38.626 "trsvcid": "4421", 00:25:38.626 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:38.626 }, 00:25:38.626 "ctrlr_data": { 00:25:38.626 "cntlid": 3, 00:25:38.626 "vendor_id": "0x8086", 00:25:38.626 "model_number": "SPDK bdev Controller", 00:25:38.626 "serial_number": "00000000000000000000", 00:25:38.626 "firmware_revision": "25.01", 00:25:38.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.626 "oacs": { 00:25:38.626 "security": 0, 00:25:38.626 "format": 0, 00:25:38.626 "firmware": 0, 00:25:38.626 "ns_manage": 0 00:25:38.626 }, 00:25:38.626 "multi_ctrlr": true, 00:25:38.626 "ana_reporting": false 00:25:38.626 }, 00:25:38.626 "vs": { 00:25:38.626 "nvme_version": "1.3" 00:25:38.626 }, 00:25:38.626 "ns_data": { 00:25:38.626 "id": 1, 00:25:38.626 "can_share": true 00:25:38.626 } 
00:25:38.626 } 00:25:38.626 ], 00:25:38.626 "mp_policy": "active_passive" 00:25:38.626 } 00:25:38.626 } 00:25:38.626 ] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.GDB70hegXN 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:38.626 rmmod nvme_tcp 00:25:38.626 rmmod nvme_fabrics 00:25:38.626 rmmod nvme_keyring 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:38.626 10:36:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2738922 ']' 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2738922 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2738922 ']' 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2738922 00:25:38.626 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:38.627 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738922 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738922' 00:25:38.885 killing process with pid 2738922 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2738922 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2738922 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:38.885 
10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.885 10:36:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:41.425 00:25:41.425 real 0m10.039s 00:25:41.425 user 0m3.837s 00:25:41.425 sys 0m4.821s 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:41.425 ************************************ 00:25:41.425 END TEST nvmf_async_init 00:25:41.425 ************************************ 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.425 ************************************ 00:25:41.425 START TEST dma 00:25:41.425 ************************************ 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:25:41.425 * Looking for test storage... 00:25:41.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:41.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.425 --rc genhtml_branch_coverage=1 00:25:41.425 --rc genhtml_function_coverage=1 00:25:41.425 --rc genhtml_legend=1 00:25:41.425 --rc geninfo_all_blocks=1 00:25:41.425 --rc geninfo_unexecuted_blocks=1 00:25:41.425 00:25:41.425 ' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:41.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.425 --rc genhtml_branch_coverage=1 00:25:41.425 --rc genhtml_function_coverage=1 
00:25:41.425 --rc genhtml_legend=1 00:25:41.425 --rc geninfo_all_blocks=1 00:25:41.425 --rc geninfo_unexecuted_blocks=1 00:25:41.425 00:25:41.425 ' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:41.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.425 --rc genhtml_branch_coverage=1 00:25:41.425 --rc genhtml_function_coverage=1 00:25:41.425 --rc genhtml_legend=1 00:25:41.425 --rc geninfo_all_blocks=1 00:25:41.425 --rc geninfo_unexecuted_blocks=1 00:25:41.425 00:25:41.425 ' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:41.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.425 --rc genhtml_branch_coverage=1 00:25:41.425 --rc genhtml_function_coverage=1 00:25:41.425 --rc genhtml_legend=1 00:25:41.425 --rc geninfo_all_blocks=1 00:25:41.425 --rc geninfo_unexecuted_blocks=1 00:25:41.425 00:25:41.425 ' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.425 10:36:18 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:41.425 
10:36:18 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:41.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:41.426 00:25:41.426 real 0m0.213s 00:25:41.426 user 0m0.123s 00:25:41.426 sys 0m0.103s 00:25:41.426 10:36:18 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:41.426 ************************************ 00:25:41.426 END TEST dma 00:25:41.426 ************************************ 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.426 ************************************ 00:25:41.426 START TEST nvmf_identify 00:25:41.426 ************************************ 00:25:41.426 10:36:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:41.426 * Looking for test storage... 
00:25:41.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.426 --rc genhtml_branch_coverage=1 00:25:41.426 --rc genhtml_function_coverage=1 00:25:41.426 --rc genhtml_legend=1 00:25:41.426 --rc geninfo_all_blocks=1 00:25:41.426 --rc geninfo_unexecuted_blocks=1 00:25:41.426 00:25:41.426 ' 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:25:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.426 --rc genhtml_branch_coverage=1 00:25:41.426 --rc genhtml_function_coverage=1 00:25:41.426 --rc genhtml_legend=1 00:25:41.426 --rc geninfo_all_blocks=1 00:25:41.426 --rc geninfo_unexecuted_blocks=1 00:25:41.426 00:25:41.426 ' 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.426 --rc genhtml_branch_coverage=1 00:25:41.426 --rc genhtml_function_coverage=1 00:25:41.426 --rc genhtml_legend=1 00:25:41.426 --rc geninfo_all_blocks=1 00:25:41.426 --rc geninfo_unexecuted_blocks=1 00:25:41.426 00:25:41.426 ' 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.426 --rc genhtml_branch_coverage=1 00:25:41.426 --rc genhtml_function_coverage=1 00:25:41.426 --rc genhtml_legend=1 00:25:41.426 --rc geninfo_all_blocks=1 00:25:41.426 --rc geninfo_unexecuted_blocks=1 00:25:41.426 00:25:41.426 ' 00:25:41.426 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.685 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:41.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:41.686 10:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:48.276 10:36:24 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:48.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.276 
10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:48.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:48.276 Found net devices under 0000:86:00.0: cvl_0_0 00:25:48.276 10:36:24 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:48.276 Found net devices under 0000:86:00.1: cvl_0_1 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.276 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:48.277 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:48.277 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.277 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.277 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.277 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.277 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:48.277 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:48.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:25:48.277 00:25:48.277 --- 10.0.0.2 ping statistics --- 00:25:48.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.277 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:25:48.277 00:25:48.277 --- 10.0.0.1 ping statistics --- 00:25:48.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.277 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2742751 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2742751 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2742751 ']' 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 [2024-12-09 10:36:25.167078] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:25:48.277 [2024-12-09 10:36:25.167126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.277 [2024-12-09 10:36:25.246451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:48.277 [2024-12-09 10:36:25.290386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.277 [2024-12-09 10:36:25.290424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.277 [2024-12-09 10:36:25.290433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.277 [2024-12-09 10:36:25.290440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.277 [2024-12-09 10:36:25.290445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
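The `waitforlisten 2742751` step above blocks until the freshly spawned `nvmf_tgt` process accepts connections on the UNIX domain socket `/var/tmp/spdk.sock` (the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." and retries up to `max_retries=100`). A minimal sketch of that polling loop, assuming the helper name `wait_for_unix_socket` and the throwaway demo server are ours (the real harness is a bash function in `autotest_common.sh`):

```python
# Hedged sketch of the "waitforlisten" pattern seen in the log: poll a UNIX
# domain socket until the target process is up and listening, or give up.
# The socket path and retry count mirror the log; everything else is illustrative.
import os
import socket
import tempfile
import threading
import time

def wait_for_unix_socket(path, max_retries=100, delay=0.1):
    """Return True once connect() to `path` succeeds, False after max_retries."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)  # target not listening yet; retry
        finally:
            s.close()
    return False

# Demonstration against a throwaway socket instead of a live nvmf_tgt:
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def serve():
    time.sleep(0.3)  # simulate slow target startup
    srv.bind(path)
    srv.listen(1)

threading.Thread(target=serve, daemon=True).start()
ready = wait_for_unix_socket(path)
print(ready)
```

In the log the same idea gates the subsequent `rpc_cmd` calls (`nvmf_create_transport`, `bdev_malloc_create`, `nvmf_create_subsystem`, ...), which are JSON-RPC requests sent over that socket once it answers.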
00:25:48.277 [2024-12-09 10:36:25.292003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.277 [2024-12-09 10:36:25.292116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.277 [2024-12-09 10:36:25.292245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.277 [2024-12-09 10:36:25.292246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 [2024-12-09 10:36:25.403003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 Malloc0 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.277 10:36:25 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.277 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.278 [2024-12-09 10:36:25.499728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.278 10:36:25 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.278 [ 00:25:48.278 { 00:25:48.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:48.278 "subtype": "Discovery", 00:25:48.278 "listen_addresses": [ 00:25:48.278 { 00:25:48.278 "trtype": "TCP", 00:25:48.278 "adrfam": "IPv4", 00:25:48.278 "traddr": "10.0.0.2", 00:25:48.278 "trsvcid": "4420" 00:25:48.278 } 00:25:48.278 ], 00:25:48.278 "allow_any_host": true, 00:25:48.278 "hosts": [] 00:25:48.278 }, 00:25:48.278 { 00:25:48.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.278 "subtype": "NVMe", 00:25:48.278 "listen_addresses": [ 00:25:48.278 { 00:25:48.278 "trtype": "TCP", 00:25:48.278 "adrfam": "IPv4", 00:25:48.278 "traddr": "10.0.0.2", 00:25:48.278 "trsvcid": "4420" 00:25:48.278 } 00:25:48.278 ], 00:25:48.278 "allow_any_host": true, 00:25:48.278 "hosts": [], 00:25:48.278 "serial_number": "SPDK00000000000001", 00:25:48.278 "model_number": "SPDK bdev Controller", 00:25:48.278 "max_namespaces": 32, 00:25:48.278 "min_cntlid": 1, 00:25:48.278 "max_cntlid": 65519, 00:25:48.278 "namespaces": [ 00:25:48.278 { 00:25:48.278 "nsid": 1, 00:25:48.278 "bdev_name": "Malloc0", 00:25:48.278 "name": "Malloc0", 00:25:48.278 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:48.278 "eui64": "ABCDEF0123456789", 00:25:48.278 "uuid": "8dd6bd0c-f86c-43e4-9416-e677bc440aba" 00:25:48.278 } 00:25:48.278 ] 00:25:48.278 } 00:25:48.278 ] 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.278 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:48.278 [2024-12-09 10:36:25.552933] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:25:48.278 [2024-12-09 10:36:25.552967] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742958 ] 00:25:48.278 [2024-12-09 10:36:25.592327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:48.278 [2024-12-09 10:36:25.592370] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:48.278 [2024-12-09 10:36:25.592375] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:48.278 [2024-12-09 10:36:25.592390] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:48.278 [2024-12-09 10:36:25.592399] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:48.278 [2024-12-09 10:36:25.596101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:48.278 [2024-12-09 10:36:25.596134] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ec9690 0 00:25:48.278 [2024-12-09 10:36:25.603817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:48.278 [2024-12-09 10:36:25.603832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:48.278 [2024-12-09 10:36:25.603840] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:48.278 [2024-12-09 10:36:25.603843] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:48.278 [2024-12-09 10:36:25.603876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.603882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.603885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.278 [2024-12-09 10:36:25.603896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:48.278 [2024-12-09 10:36:25.603914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.278 [2024-12-09 10:36:25.610817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.278 [2024-12-09 10:36:25.610826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.278 [2024-12-09 10:36:25.610829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.610833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.278 [2024-12-09 10:36:25.610843] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:48.278 [2024-12-09 10:36:25.610849] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:48.278 [2024-12-09 10:36:25.610853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:48.278 [2024-12-09 10:36:25.610868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.610871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.610875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 
00:25:48.278 [2024-12-09 10:36:25.610881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.278 [2024-12-09 10:36:25.610894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.278 [2024-12-09 10:36:25.611038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.278 [2024-12-09 10:36:25.611044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.278 [2024-12-09 10:36:25.611047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.611050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.278 [2024-12-09 10:36:25.611057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:48.278 [2024-12-09 10:36:25.611064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:48.278 [2024-12-09 10:36:25.611073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.611077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.278 [2024-12-09 10:36:25.611080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.278 [2024-12-09 10:36:25.611086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.278 [2024-12-09 10:36:25.611096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.278 [2024-12-09 10:36:25.611158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.278 [2024-12-09 10:36:25.611164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:48.279 [2024-12-09 10:36:25.611166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.279 [2024-12-09 10:36:25.611174] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:48.279 [2024-12-09 10:36:25.611181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:48.279 [2024-12-09 10:36:25.611187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.279 [2024-12-09 10:36:25.611199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.279 [2024-12-09 10:36:25.611208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.279 [2024-12-09 10:36:25.611270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.279 [2024-12-09 10:36:25.611276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.279 [2024-12-09 10:36:25.611278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.279 [2024-12-09 10:36:25.611286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:48.279 [2024-12-09 10:36:25.611294] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.279 [2024-12-09 10:36:25.611306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.279 [2024-12-09 10:36:25.611315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.279 [2024-12-09 10:36:25.611376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.279 [2024-12-09 10:36:25.611381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.279 [2024-12-09 10:36:25.611384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.279 [2024-12-09 10:36:25.611392] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:48.279 [2024-12-09 10:36:25.611396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:48.279 [2024-12-09 10:36:25.611402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:48.279 [2024-12-09 10:36:25.611509] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:48.279 [2024-12-09 10:36:25.611513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:25:48.279 [2024-12-09 10:36:25.611520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.279 [2024-12-09 10:36:25.611532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.279 [2024-12-09 10:36:25.611541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.279 [2024-12-09 10:36:25.611604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.279 [2024-12-09 10:36:25.611609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.279 [2024-12-09 10:36:25.611612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.279 [2024-12-09 10:36:25.611619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:48.279 [2024-12-09 10:36:25.611627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.279 [2024-12-09 10:36:25.611639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.279 [2024-12-09 10:36:25.611648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.279 [2024-12-09 
10:36:25.611714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.279 [2024-12-09 10:36:25.611720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.279 [2024-12-09 10:36:25.611723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.279 [2024-12-09 10:36:25.611729] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:48.279 [2024-12-09 10:36:25.611733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:48.279 [2024-12-09 10:36:25.611740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:48.279 [2024-12-09 10:36:25.611747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:48.279 [2024-12-09 10:36:25.611754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.279 [2024-12-09 10:36:25.611763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.279 [2024-12-09 10:36:25.611773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.279 [2024-12-09 10:36:25.611868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.279 [2024-12-09 10:36:25.611874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:25:48.279 [2024-12-09 10:36:25.611878] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611885] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec9690): datao=0, datal=4096, cccid=0 00:25:48.279 [2024-12-09 10:36:25.611889] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2b100) on tqpair(0x1ec9690): expected_datao=0, payload_size=4096 00:25:48.279 [2024-12-09 10:36:25.611893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611899] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.611903] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.655813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.279 [2024-12-09 10:36:25.655823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.279 [2024-12-09 10:36:25.655826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.279 [2024-12-09 10:36:25.655830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.279 [2024-12-09 10:36:25.655840] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:48.279 [2024-12-09 10:36:25.655845] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:48.279 [2024-12-09 10:36:25.655849] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:48.279 [2024-12-09 10:36:25.655853] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:48.280 [2024-12-09 10:36:25.655857] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:48.280 [2024-12-09 10:36:25.655861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:48.280 [2024-12-09 10:36:25.655870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:48.280 [2024-12-09 10:36:25.655877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.655880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.655883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.655890] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:48.280 [2024-12-09 10:36:25.655902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.280 [2024-12-09 10:36:25.656046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.280 [2024-12-09 10:36:25.656052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.280 [2024-12-09 10:36:25.656055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.280 [2024-12-09 10:36:25.656065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656077] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.280 [2024-12-09 10:36:25.656082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.280 [2024-12-09 10:36:25.656098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.280 [2024-12-09 10:36:25.656117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.280 [2024-12-09 10:36:25.656132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:48.280 [2024-12-09 10:36:25.656142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:48.280 [2024-12-09 10:36:25.656148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.280 [2024-12-09 10:36:25.656168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b100, cid 0, qid 0 00:25:48.280 [2024-12-09 10:36:25.656172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b280, cid 1, qid 0 00:25:48.280 [2024-12-09 10:36:25.656176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b400, cid 2, qid 0 00:25:48.280 [2024-12-09 10:36:25.656180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.280 [2024-12-09 10:36:25.656184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b700, cid 4, qid 0 00:25:48.280 [2024-12-09 10:36:25.656280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.280 [2024-12-09 10:36:25.656286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.280 [2024-12-09 10:36:25.656289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b700) on tqpair=0x1ec9690 00:25:48.280 [2024-12-09 10:36:25.656297] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:48.280 [2024-12-09 10:36:25.656301] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:25:48.280 [2024-12-09 10:36:25.656310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.280 [2024-12-09 10:36:25.656328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b700, cid 4, qid 0 00:25:48.280 [2024-12-09 10:36:25.656402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.280 [2024-12-09 10:36:25.656408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:48.280 [2024-12-09 10:36:25.656411] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656414] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec9690): datao=0, datal=4096, cccid=4 00:25:48.280 [2024-12-09 10:36:25.656418] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2b700) on tqpair(0x1ec9690): expected_datao=0, payload_size=4096 00:25:48.280 [2024-12-09 10:36:25.656424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656430] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656433] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.280 [2024-12-09 10:36:25.656448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.280 [2024-12-09 10:36:25.656451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1f2b700) on tqpair=0x1ec9690 00:25:48.280 [2024-12-09 10:36:25.656464] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:48.280 [2024-12-09 10:36:25.656484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.280 [2024-12-09 10:36:25.656499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec9690) 00:25:48.280 [2024-12-09 10:36:25.656511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.280 [2024-12-09 10:36:25.656523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b700, cid 4, qid 0 00:25:48.280 [2024-12-09 10:36:25.656528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b880, cid 5, qid 0 00:25:48.280 [2024-12-09 10:36:25.656623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.280 [2024-12-09 10:36:25.656629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:48.280 [2024-12-09 10:36:25.656632] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656635] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec9690): datao=0, datal=1024, cccid=4 00:25:48.280 [2024-12-09 10:36:25.656639] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2b700) on tqpair(0x1ec9690): expected_datao=0, payload_size=1024 00:25:48.280 [2024-12-09 10:36:25.656642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656648] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656651] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.280 [2024-12-09 10:36:25.656656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.280 [2024-12-09 10:36:25.656660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.280 [2024-12-09 10:36:25.656663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.656666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b880) on tqpair=0x1ec9690 00:25:48.281 [2024-12-09 10:36:25.697943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.281 [2024-12-09 10:36:25.697954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.281 [2024-12-09 10:36:25.697957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.697960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b700) on tqpair=0x1ec9690 00:25:48.281 [2024-12-09 10:36:25.697970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.697974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec9690) 00:25:48.281 [2024-12-09 10:36:25.697980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.281 [2024-12-09 10:36:25.697999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b700, cid 4, qid 0 00:25:48.281 [2024-12-09 10:36:25.698072] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.281 [2024-12-09 10:36:25.698077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:48.281 [2024-12-09 10:36:25.698080] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.698084] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec9690): datao=0, datal=3072, cccid=4 00:25:48.281 [2024-12-09 10:36:25.698087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2b700) on tqpair(0x1ec9690): expected_datao=0, payload_size=3072 00:25:48.281 [2024-12-09 10:36:25.698091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.698105] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.698109] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.741817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.281 [2024-12-09 10:36:25.741827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.281 [2024-12-09 10:36:25.741830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.741834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b700) on tqpair=0x1ec9690 00:25:48.281 [2024-12-09 10:36:25.741843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.281 [2024-12-09 10:36:25.741846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec9690) 00:25:48.281 [2024-12-09 10:36:25.741852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.281 [2024-12-09 10:36:25.741868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b700, cid 4, qid 0 00:25:48.281 [2024-12-09 
10:36:25.741982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:48.281 [2024-12-09 10:36:25.741987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:48.281 [2024-12-09 10:36:25.741990] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:48.281 [2024-12-09 10:36:25.741993] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec9690): datao=0, datal=8, cccid=4
00:25:48.281 [2024-12-09 10:36:25.741997] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2b700) on tqpair(0x1ec9690): expected_datao=0, payload_size=8
00:25:48.281 [2024-12-09 10:36:25.742001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.281 [2024-12-09 10:36:25.742006] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:48.281 [2024-12-09 10:36:25.742009] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:48.281 [2024-12-09 10:36:25.783922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.281 [2024-12-09 10:36:25.783930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.281 [2024-12-09 10:36:25.783933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.281 [2024-12-09 10:36:25.783937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b700) on tqpair=0x1ec9690
00:25:48.281 =====================================================
00:25:48.281 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:48.281 =====================================================
00:25:48.281 Controller Capabilities/Features
00:25:48.281 ================================
00:25:48.281 Vendor ID: 0000
00:25:48.281 Subsystem Vendor ID: 0000
00:25:48.281 Serial Number: ....................
00:25:48.281 Model Number: ........................................
00:25:48.281 Firmware Version: 25.01
00:25:48.281 Recommended Arb Burst: 0
00:25:48.281 IEEE OUI Identifier: 00 00 00
00:25:48.281 Multi-path I/O
00:25:48.281 May have multiple subsystem ports: No
00:25:48.281 May have multiple controllers: No
00:25:48.281 Associated with SR-IOV VF: No
00:25:48.281 Max Data Transfer Size: 131072
00:25:48.281 Max Number of Namespaces: 0
00:25:48.281 Max Number of I/O Queues: 1024
00:25:48.281 NVMe Specification Version (VS): 1.3
00:25:48.281 NVMe Specification Version (Identify): 1.3
00:25:48.281 Maximum Queue Entries: 128
00:25:48.281 Contiguous Queues Required: Yes
00:25:48.281 Arbitration Mechanisms Supported
00:25:48.281 Weighted Round Robin: Not Supported
00:25:48.281 Vendor Specific: Not Supported
00:25:48.281 Reset Timeout: 15000 ms
00:25:48.281 Doorbell Stride: 4 bytes
00:25:48.281 NVM Subsystem Reset: Not Supported
00:25:48.281 Command Sets Supported
00:25:48.281 NVM Command Set: Supported
00:25:48.281 Boot Partition: Not Supported
00:25:48.281 Memory Page Size Minimum: 4096 bytes
00:25:48.281 Memory Page Size Maximum: 4096 bytes
00:25:48.281 Persistent Memory Region: Not Supported
00:25:48.281 Optional Asynchronous Events Supported
00:25:48.281 Namespace Attribute Notices: Not Supported
00:25:48.281 Firmware Activation Notices: Not Supported
00:25:48.281 ANA Change Notices: Not Supported
00:25:48.281 PLE Aggregate Log Change Notices: Not Supported
00:25:48.281 LBA Status Info Alert Notices: Not Supported
00:25:48.281 EGE Aggregate Log Change Notices: Not Supported
00:25:48.281 Normal NVM Subsystem Shutdown event: Not Supported
00:25:48.281 Zone Descriptor Change Notices: Not Supported
00:25:48.281 Discovery Log Change Notices: Supported
00:25:48.281 Controller Attributes
00:25:48.281 128-bit Host Identifier: Not Supported
00:25:48.281 Non-Operational Permissive Mode: Not Supported
00:25:48.281 NVM Sets: Not Supported
00:25:48.281 Read Recovery Levels: Not Supported
00:25:48.281 Endurance Groups: Not Supported
00:25:48.281 Predictable Latency Mode: Not Supported
00:25:48.281 Traffic Based Keep ALive: Not Supported
00:25:48.281 Namespace Granularity: Not Supported
00:25:48.281 SQ Associations: Not Supported
00:25:48.281 UUID List: Not Supported
00:25:48.281 Multi-Domain Subsystem: Not Supported
00:25:48.281 Fixed Capacity Management: Not Supported
00:25:48.281 Variable Capacity Management: Not Supported
00:25:48.281 Delete Endurance Group: Not Supported
00:25:48.281 Delete NVM Set: Not Supported
00:25:48.281 Extended LBA Formats Supported: Not Supported
00:25:48.281 Flexible Data Placement Supported: Not Supported
00:25:48.281 
00:25:48.281 Controller Memory Buffer Support
00:25:48.281 ================================
00:25:48.281 Supported: No
00:25:48.281 
00:25:48.281 Persistent Memory Region Support
00:25:48.281 ================================
00:25:48.282 Supported: No
00:25:48.282 
00:25:48.282 Admin Command Set Attributes
00:25:48.282 ============================
00:25:48.282 Security Send/Receive: Not Supported
00:25:48.282 Format NVM: Not Supported
00:25:48.282 Firmware Activate/Download: Not Supported
00:25:48.282 Namespace Management: Not Supported
00:25:48.282 Device Self-Test: Not Supported
00:25:48.282 Directives: Not Supported
00:25:48.282 NVMe-MI: Not Supported
00:25:48.282 Virtualization Management: Not Supported
00:25:48.282 Doorbell Buffer Config: Not Supported
00:25:48.282 Get LBA Status Capability: Not Supported
00:25:48.282 Command & Feature Lockdown Capability: Not Supported
00:25:48.282 Abort Command Limit: 1
00:25:48.282 Async Event Request Limit: 4
00:25:48.282 Number of Firmware Slots: N/A
00:25:48.282 Firmware Slot 1 Read-Only: N/A
00:25:48.282 Firmware Activation Without Reset: N/A
00:25:48.282 Multiple Update Detection Support: N/A
00:25:48.282 Firmware Update Granularity: No Information Provided
00:25:48.282 Per-Namespace SMART Log: No
00:25:48.282 Asymmetric Namespace Access Log Page: Not Supported
00:25:48.282 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:48.282 Command Effects Log Page: Not Supported
00:25:48.282 Get Log Page Extended Data: Supported
00:25:48.282 Telemetry Log Pages: Not Supported
00:25:48.282 Persistent Event Log Pages: Not Supported
00:25:48.282 Supported Log Pages Log Page: May Support
00:25:48.282 Commands Supported & Effects Log Page: Not Supported
00:25:48.282 Feature Identifiers & Effects Log Page:May Support
00:25:48.282 NVMe-MI Commands & Effects Log Page: May Support
00:25:48.282 Data Area 4 for Telemetry Log: Not Supported
00:25:48.282 Error Log Page Entries Supported: 128
00:25:48.282 Keep Alive: Not Supported
00:25:48.282 
00:25:48.282 NVM Command Set Attributes
00:25:48.282 ==========================
00:25:48.282 Submission Queue Entry Size
00:25:48.282 Max: 1
00:25:48.282 Min: 1
00:25:48.282 Completion Queue Entry Size
00:25:48.282 Max: 1
00:25:48.282 Min: 1
00:25:48.282 Number of Namespaces: 0
00:25:48.282 Compare Command: Not Supported
00:25:48.282 Write Uncorrectable Command: Not Supported
00:25:48.282 Dataset Management Command: Not Supported
00:25:48.282 Write Zeroes Command: Not Supported
00:25:48.282 Set Features Save Field: Not Supported
00:25:48.282 Reservations: Not Supported
00:25:48.282 Timestamp: Not Supported
00:25:48.282 Copy: Not Supported
00:25:48.282 Volatile Write Cache: Not Present
00:25:48.282 Atomic Write Unit (Normal): 1
00:25:48.282 Atomic Write Unit (PFail): 1
00:25:48.282 Atomic Compare & Write Unit: 1
00:25:48.282 Fused Compare & Write: Supported
00:25:48.282 Scatter-Gather List
00:25:48.282 SGL Command Set: Supported
00:25:48.282 SGL Keyed: Supported
00:25:48.282 SGL Bit Bucket Descriptor: Not Supported
00:25:48.282 SGL Metadata Pointer: Not Supported
00:25:48.282 Oversized SGL: Not Supported
00:25:48.282 SGL Metadata Address: Not Supported
00:25:48.282 SGL Offset: Supported
00:25:48.282 Transport SGL Data Block: Not Supported
00:25:48.282 Replay Protected Memory Block: Not Supported
00:25:48.282 
00:25:48.282 Firmware Slot Information
00:25:48.282 =========================
00:25:48.282 Active slot: 0
00:25:48.282 
00:25:48.282 
00:25:48.282 Error Log
00:25:48.282 =========
00:25:48.282 
00:25:48.282 Active Namespaces
00:25:48.282 =================
00:25:48.282 Discovery Log Page
00:25:48.282 ==================
00:25:48.282 Generation Counter: 2
00:25:48.282 Number of Records: 2
00:25:48.282 Record Format: 0
00:25:48.282 
00:25:48.282 Discovery Log Entry 0
00:25:48.282 ----------------------
00:25:48.282 Transport Type: 3 (TCP)
00:25:48.282 Address Family: 1 (IPv4)
00:25:48.282 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:48.282 Entry Flags:
00:25:48.282 Duplicate Returned Information: 1
00:25:48.282 Explicit Persistent Connection Support for Discovery: 1
00:25:48.282 Transport Requirements:
00:25:48.282 Secure Channel: Not Required
00:25:48.282 Port ID: 0 (0x0000)
00:25:48.282 Controller ID: 65535 (0xffff)
00:25:48.282 Admin Max SQ Size: 128
00:25:48.282 Transport Service Identifier: 4420
00:25:48.282 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:48.282 Transport Address: 10.0.0.2
00:25:48.282 Discovery Log Entry 1
00:25:48.282 ----------------------
00:25:48.282 Transport Type: 3 (TCP)
00:25:48.282 Address Family: 1 (IPv4)
00:25:48.282 Subsystem Type: 2 (NVM Subsystem)
00:25:48.282 Entry Flags:
00:25:48.282 Duplicate Returned Information: 0
00:25:48.282 Explicit Persistent Connection Support for Discovery: 0
00:25:48.282 Transport Requirements:
00:25:48.282 Secure Channel: Not Required
00:25:48.282 Port ID: 0 (0x0000)
00:25:48.282 Controller ID: 65535 (0xffff)
00:25:48.282 Admin Max SQ Size: 128
00:25:48.282 Transport Service Identifier: 4420
00:25:48.283 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:48.283 Transport Address: 10.0.0.2 [2024-12-09 10:36:25.784019] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:25:48.283 [2024-12-09
10:36:25.784030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b100) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.283 [2024-12-09 10:36:25.784041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b280) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.283 [2024-12-09 10:36:25.784049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b400) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.283 [2024-12-09 10:36:25.784059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.283 [2024-12-09 10:36:25.784072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.283 [2024-12-09 10:36:25.784085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-12-09 10:36:25.784099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.283 [2024-12-09 10:36:25.784160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.283 [2024-12-09 
10:36:25.784165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.283 [2024-12-09 10:36:25.784168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.283 [2024-12-09 10:36:25.784190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-12-09 10:36:25.784202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.283 [2024-12-09 10:36:25.784272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.283 [2024-12-09 10:36:25.784278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.283 [2024-12-09 10:36:25.784281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784288] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:48.283 [2024-12-09 10:36:25.784292] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:48.283 [2024-12-09 10:36:25.784300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.283 
[2024-12-09 10:36:25.784307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.283 [2024-12-09 10:36:25.784312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-12-09 10:36:25.784321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.283 [2024-12-09 10:36:25.784381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.283 [2024-12-09 10:36:25.784387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.283 [2024-12-09 10:36:25.784390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.283 [2024-12-09 10:36:25.784414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-12-09 10:36:25.784425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.283 [2024-12-09 10:36:25.784504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.283 [2024-12-09 10:36:25.784509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.283 [2024-12-09 10:36:25.784512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on 
tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.283 [2024-12-09 10:36:25.784536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-12-09 10:36:25.784545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.283 [2024-12-09 10:36:25.784606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.283 [2024-12-09 10:36:25.784611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.283 [2024-12-09 10:36:25.784614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.283 [2024-12-09 10:36:25.784637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-12-09 10:36:25.784646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.283 [2024-12-09 10:36:25.784702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.283 [2024-12-09 10:36:25.784708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:25:48.283 [2024-12-09 10:36:25.784711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.283 [2024-12-09 10:36:25.784734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.283 [2024-12-09 10:36:25.784743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.283 [2024-12-09 10:36:25.784800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.283 [2024-12-09 10:36:25.784806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.283 [2024-12-09 10:36:25.784814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.283 [2024-12-09 10:36:25.784825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.283 [2024-12-09 10:36:25.784828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.784831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.284 [2024-12-09 10:36:25.784837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.284 [2024-12-09 10:36:25.784846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1f2b580, cid 3, qid 0 00:25:48.284 [2024-12-09 10:36:25.784909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.284 [2024-12-09 10:36:25.784915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.284 [2024-12-09 10:36:25.784918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.784921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.284 [2024-12-09 10:36:25.784928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.784932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.784935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.284 [2024-12-09 10:36:25.784941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.284 [2024-12-09 10:36:25.784949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.284 [2024-12-09 10:36:25.785009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.284 [2024-12-09 10:36:25.785014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.284 [2024-12-09 10:36:25.785017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.284 [2024-12-09 10:36:25.785028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.284 [2024-12-09 10:36:25.785040] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.284 [2024-12-09 10:36:25.785049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.284 [2024-12-09 10:36:25.785108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.284 [2024-12-09 10:36:25.785113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.284 [2024-12-09 10:36:25.785116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.284 [2024-12-09 10:36:25.785127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690) 00:25:48.284 [2024-12-09 10:36:25.785139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.284 [2024-12-09 10:36:25.785148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0 00:25:48.284 [2024-12-09 10:36:25.785212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.284 [2024-12-09 10:36:25.785217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.284 [2024-12-09 10:36:25.785220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690 00:25:48.284 [2024-12-09 10:36:25.785231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.284 [2024-12-09 10:36:25.785235] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.284 [2024-12-09 10:36:25.785243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.284 [2024-12-09 10:36:25.785252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.284 [2024-12-09 10:36:25.785312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.284 [2024-12-09 10:36:25.785319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.284 [2024-12-09 10:36:25.785322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.284 [2024-12-09 10:36:25.785333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.284 [2024-12-09 10:36:25.785345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.284 [2024-12-09 10:36:25.785354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.284 [2024-12-09 10:36:25.785423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.284 [2024-12-09 10:36:25.785428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.284 [2024-12-09 10:36:25.785431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.284 [2024-12-09 10:36:25.785443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.284 [2024-12-09 10:36:25.785455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.284 [2024-12-09 10:36:25.785465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.284 [2024-12-09 10:36:25.785527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.284 [2024-12-09 10:36:25.785532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.284 [2024-12-09 10:36:25.785535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.284 [2024-12-09 10:36:25.785547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.284 [2024-12-09 10:36:25.785559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.284 [2024-12-09 10:36:25.785568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.284 [2024-12-09 10:36:25.785631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.284 [2024-12-09 10:36:25.785637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.284 [2024-12-09 10:36:25.785639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.284 [2024-12-09 10:36:25.785651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.284 [2024-12-09 10:36:25.785663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.284 [2024-12-09 10:36:25.785673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.284 [2024-12-09 10:36:25.785733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.284 [2024-12-09 10:36:25.785739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.284 [2024-12-09 10:36:25.785743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.284 [2024-12-09 10:36:25.785754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.284 [2024-12-09 10:36:25.785761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.285 [2024-12-09 10:36:25.785766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.285 [2024-12-09 10:36:25.785775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.285 [2024-12-09 10:36:25.785847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.285 [2024-12-09 10:36:25.785853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.285 [2024-12-09 10:36:25.785856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.785859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.285 [2024-12-09 10:36:25.785867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.785871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.785874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.285 [2024-12-09 10:36:25.785879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.285 [2024-12-09 10:36:25.785889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.285 [2024-12-09 10:36:25.785946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.285 [2024-12-09 10:36:25.785952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.285 [2024-12-09 10:36:25.785955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.785958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.285 [2024-12-09 10:36:25.785965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.785969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.785972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.285 [2024-12-09 10:36:25.785977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.285 [2024-12-09 10:36:25.785986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.285 [2024-12-09 10:36:25.786045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.285 [2024-12-09 10:36:25.786051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.285 [2024-12-09 10:36:25.786054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.786057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.285 [2024-12-09 10:36:25.786064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.786068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.786071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.285 [2024-12-09 10:36:25.786076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.285 [2024-12-09 10:36:25.786085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.285 [2024-12-09 10:36:25.789814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.285 [2024-12-09 10:36:25.789822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.285 [2024-12-09 10:36:25.789825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.789830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.285 [2024-12-09 10:36:25.789840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.789844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.789846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec9690)
00:25:48.285 [2024-12-09 10:36:25.789852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.285 [2024-12-09 10:36:25.789863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2b580, cid 3, qid 0
00:25:48.285 [2024-12-09 10:36:25.790020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.285 [2024-12-09 10:36:25.790025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.285 [2024-12-09 10:36:25.790028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.790031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2b580) on tqpair=0x1ec9690
00:25:48.285 [2024-12-09 10:36:25.790039] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:25:48.285 
00:25:48.285 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:25:48.285 [2024-12-09 10:36:25.827380] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
00:25:48.285 [2024-12-09 10:36:25.827414] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742995 ]
00:25:48.285 [2024-12-09 10:36:25.868143] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:25:48.285 [2024-12-09 10:36:25.868187] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:25:48.285 [2024-12-09 10:36:25.868192] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:25:48.285 [2024-12-09 10:36:25.868207] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:25:48.285 [2024-12-09 10:36:25.868216] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:25:48.285 [2024-12-09 10:36:25.868732] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:25:48.285 [2024-12-09 10:36:25.868763] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x88f690 0
00:25:48.285 [2024-12-09 10:36:25.878822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:25:48.285 [2024-12-09 10:36:25.878835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:25:48.285 [2024-12-09 10:36:25.878842] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:25:48.285 [2024-12-09 10:36:25.878845] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:25:48.285 [2024-12-09 10:36:25.878876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.878881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.878884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.285 [2024-12-09 10:36:25.878893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:25:48.285 [2024-12-09 10:36:25.878909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.285 [2024-12-09 10:36:25.886820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.285 [2024-12-09 10:36:25.886831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.285 [2024-12-09 10:36:25.886835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.886839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.285 [2024-12-09 10:36:25.886849] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:25:48.285 [2024-12-09 10:36:25.886856] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:25:48.285 [2024-12-09 10:36:25.886860] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:25:48.285 [2024-12-09 10:36:25.886873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.886877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.285 [2024-12-09 10:36:25.886880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.285 [2024-12-09 10:36:25.886886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.285 [2024-12-09 10:36:25.886899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.285 [2024-12-09 10:36:25.887038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.285 [2024-12-09 10:36:25.887044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.286 [2024-12-09 10:36:25.887047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.286 [2024-12-09 10:36:25.887057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:25:48.286 [2024-12-09 10:36:25.887064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:25:48.286 [2024-12-09 10:36:25.887071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.286 [2024-12-09 10:36:25.887083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.286 [2024-12-09 10:36:25.887094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.286 [2024-12-09 10:36:25.887153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.286 [2024-12-09 10:36:25.887158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.286 [2024-12-09 10:36:25.887161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.286 [2024-12-09 10:36:25.887169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:25:48.286 [2024-12-09 10:36:25.887176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:25:48.286 [2024-12-09 10:36:25.887182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.286 [2024-12-09 10:36:25.887194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.286 [2024-12-09 10:36:25.887204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.286 [2024-12-09 10:36:25.887265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.286 [2024-12-09 10:36:25.887273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.286 [2024-12-09 10:36:25.887276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.286 [2024-12-09 10:36:25.887284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:25:48.286 [2024-12-09 10:36:25.887292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.286 [2024-12-09 10:36:25.887304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.286 [2024-12-09 10:36:25.887314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.286 [2024-12-09 10:36:25.887388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.286 [2024-12-09 10:36:25.887393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.286 [2024-12-09 10:36:25.887396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.286 [2024-12-09 10:36:25.887403] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:25:48.286 [2024-12-09 10:36:25.887407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:25:48.286 [2024-12-09 10:36:25.887414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:25:48.286 [2024-12-09 10:36:25.887519] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:25:48.286 [2024-12-09 10:36:25.887523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:25:48.286 [2024-12-09 10:36:25.887530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.286 [2024-12-09 10:36:25.887542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.286 [2024-12-09 10:36:25.887552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.286 [2024-12-09 10:36:25.887632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.286 [2024-12-09 10:36:25.887638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.286 [2024-12-09 10:36:25.887641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.286 [2024-12-09 10:36:25.887648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:25:48.286 [2024-12-09 10:36:25.887657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.286 [2024-12-09 10:36:25.887669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.286 [2024-12-09 10:36:25.887679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.286 [2024-12-09 10:36:25.887742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.286 [2024-12-09 10:36:25.887750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.286 [2024-12-09 10:36:25.887753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.286 [2024-12-09 10:36:25.887759] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:25:48.286 [2024-12-09 10:36:25.887764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:25:48.286 [2024-12-09 10:36:25.887770] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:25:48.286 [2024-12-09 10:36:25.887782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:25:48.286 [2024-12-09 10:36:25.887790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.286 [2024-12-09 10:36:25.887799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.286 [2024-12-09 10:36:25.887814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.286 [2024-12-09 10:36:25.887898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:48.286 [2024-12-09 10:36:25.887903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:48.286 [2024-12-09 10:36:25.887906] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887909] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=4096, cccid=0
00:25:48.286 [2024-12-09 10:36:25.887913] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f1100) on tqpair(0x88f690): expected_datao=0, payload_size=4096
00:25:48.286 [2024-12-09 10:36:25.887917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887931] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887934] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.286 [2024-12-09 10:36:25.887983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.286 [2024-12-09 10:36:25.887986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.286 [2024-12-09 10:36:25.887989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.286 [2024-12-09 10:36:25.887998] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:25:48.287 [2024-12-09 10:36:25.888002] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:25:48.287 [2024-12-09 10:36:25.888006] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:25:48.287 [2024-12-09 10:36:25.888009] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:25:48.287 [2024-12-09 10:36:25.888013] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:25:48.287 [2024-12-09 10:36:25.888017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:48.287 [2024-12-09 10:36:25.888064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.287 [2024-12-09 10:36:25.888126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.287 [2024-12-09 10:36:25.888131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.287 [2024-12-09 10:36:25.888134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690
00:25:48.287 [2024-12-09 10:36:25.888143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:48.287 [2024-12-09 10:36:25.888160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:48.287 [2024-12-09 10:36:25.888176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:48.287 [2024-12-09 10:36:25.888192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:48.287 [2024-12-09 10:36:25.888207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.287 [2024-12-09 10:36:25.888242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1100, cid 0, qid 0
00:25:48.287 [2024-12-09 10:36:25.888247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1280, cid 1, qid 0
00:25:48.287 [2024-12-09 10:36:25.888251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1400, cid 2, qid 0
00:25:48.287 [2024-12-09 10:36:25.888255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0
00:25:48.287 [2024-12-09 10:36:25.888259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1700, cid 4, qid 0
00:25:48.287 [2024-12-09 10:36:25.888351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.287 [2024-12-09 10:36:25.888360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.287 [2024-12-09 10:36:25.888363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1700) on tqpair=0x88f690
00:25:48.287 [2024-12-09 10:36:25.888370] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:25:48.287 [2024-12-09 10:36:25.888374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:48.287 [2024-12-09 10:36:25.888413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1700, cid 4, qid 0
00:25:48.287 [2024-12-09 10:36:25.888477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.287 [2024-12-09 10:36:25.888482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.287 [2024-12-09 10:36:25.888485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1700) on tqpair=0x88f690
00:25:48.287 [2024-12-09 10:36:25.888540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:25:48.287 [2024-12-09 10:36:25.888556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88f690)
00:25:48.287 [2024-12-09 10:36:25.888565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.287 [2024-12-09 10:36:25.888574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1700, cid 4, qid 0
00:25:48.287 [2024-12-09 10:36:25.888647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:48.287 [2024-12-09 10:36:25.888653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:48.287 [2024-12-09 10:36:25.888656] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888659] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=4096, cccid=4
00:25:48.287 [2024-12-09 10:36:25.888662] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f1700) on tqpair(0x88f690): expected_datao=0, payload_size=4096
00:25:48.287 [2024-12-09 10:36:25.888666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888678] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.888682] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:48.287 [2024-12-09 10:36:25.929950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.287 [2024-12-09 10:36:25.929963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.287 [2024-12-09 10:36:25.929966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.929970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1700) on tqpair=0x88f690
00:25:48.288 [2024-12-09 10:36:25.929984] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:25:48.288 [2024-12-09 10:36:25.929995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:25:48.288 [2024-12-09 10:36:25.930004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:25:48.288 [2024-12-09 10:36:25.930011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.930015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88f690)
00:25:48.288 [2024-12-09 10:36:25.930022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.288 [2024-12-09 10:36:25.930035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1700, cid 4, qid 0
00:25:48.288 [2024-12-09 10:36:25.930129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:48.288 [2024-12-09 10:36:25.930135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:48.288 [2024-12-09 10:36:25.930138] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.930141] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=4096, cccid=4
00:25:48.288 [2024-12-09 10:36:25.930145] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f1700) on tqpair(0x88f690): expected_datao=0, payload_size=4096
00:25:48.288 [2024-12-09 10:36:25.930149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.930159] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.930163] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.971924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.288 [2024-12-09 10:36:25.971933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.288 [2024-12-09 10:36:25.971936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.971940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1700) on tqpair=0x88f690
00:25:48.288 [2024-12-09 10:36:25.971954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:25:48.288 [2024-12-09 10:36:25.971963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:25:48.288 [2024-12-09 10:36:25.971970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.971973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88f690)
00:25:48.288 [2024-12-09 10:36:25.971980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.288 [2024-12-09 10:36:25.971992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1700, cid 4, qid 0
00:25:48.288 [2024-12-09 10:36:25.972067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:48.288 [2024-12-09 10:36:25.972073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:48.288 [2024-12-09 10:36:25.972076] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.972079] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=4096, cccid=4
00:25:48.288 [2024-12-09 10:36:25.972083] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f1700) on tqpair(0x88f690): expected_datao=0, payload_size=4096
00:25:48.288 [2024-12-09 10:36:25.972087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.972097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:48.288 [2024-12-09 10:36:25.972101] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:48.549 [2024-12-09 10:36:26.014816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:48.549 [2024-12-09 10:36:26.014825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:48.549 [2024-12-09 10:36:26.014828]
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.014831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1700) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.014839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:48.549 [2024-12-09 10:36:26.014847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:48.549 [2024-12-09 10:36:26.014856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:48.549 [2024-12-09 10:36:26.014864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:48.549 [2024-12-09 10:36:26.014868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:48.549 [2024-12-09 10:36:26.014873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:48.549 [2024-12-09 10:36:26.014878] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:48.549 [2024-12-09 10:36:26.014882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:48.549 [2024-12-09 10:36:26.014887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:48.549 [2024-12-09 10:36:26.014901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.014905] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.014911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 [2024-12-09 10:36:26.014917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.014920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.014924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.014929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.549 [2024-12-09 10:36:26.014943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1700, cid 4, qid 0 00:25:48.549 [2024-12-09 10:36:26.014948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1880, cid 5, qid 0 00:25:48.549 [2024-12-09 10:36:26.015033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1700) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.015051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1880) on tqpair=0x88f690 00:25:48.549 [2024-12-09 
10:36:26.015070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.015079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 [2024-12-09 10:36:26.015090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1880, cid 5, qid 0 00:25:48.549 [2024-12-09 10:36:26.015161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1880) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.015180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.015189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 [2024-12-09 10:36:26.015198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1880, cid 5, qid 0 00:25:48.549 [2024-12-09 10:36:26.015258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x8f1880) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.015277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.015286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 [2024-12-09 10:36:26.015295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1880, cid 5, qid 0 00:25:48.549 [2024-12-09 10:36:26.015356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1880) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.015383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.015393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 [2024-12-09 10:36:26.015399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.015407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 
[2024-12-09 10:36:26.015413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.015421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 [2024-12-09 10:36:26.015428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x88f690) 00:25:48.549 [2024-12-09 10:36:26.015436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.549 [2024-12-09 10:36:26.015448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1880, cid 5, qid 0 00:25:48.549 [2024-12-09 10:36:26.015453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1700, cid 4, qid 0 00:25:48.549 [2024-12-09 10:36:26.015457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1a00, cid 6, qid 0 00:25:48.549 [2024-12-09 10:36:26.015461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1b80, cid 7, qid 0 00:25:48.549 [2024-12-09 10:36:26.015599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.549 [2024-12-09 10:36:26.015605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:48.549 [2024-12-09 10:36:26.015608] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=8192, cccid=5 00:25:48.549 [2024-12-09 10:36:26.015615] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x8f1880) on tqpair(0x88f690): expected_datao=0, payload_size=8192 00:25:48.549 [2024-12-09 10:36:26.015619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015645] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015648] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.549 [2024-12-09 10:36:26.015658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:48.549 [2024-12-09 10:36:26.015660] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015664] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=512, cccid=4 00:25:48.549 [2024-12-09 10:36:26.015667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f1700) on tqpair(0x88f690): expected_datao=0, payload_size=512 00:25:48.549 [2024-12-09 10:36:26.015671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015676] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015679] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.549 [2024-12-09 10:36:26.015689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:48.549 [2024-12-09 10:36:26.015692] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015694] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=512, cccid=6 00:25:48.549 [2024-12-09 10:36:26.015698] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f1a00) on tqpair(0x88f690): expected_datao=0, 
payload_size=512 00:25:48.549 [2024-12-09 10:36:26.015702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015707] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015710] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:48.549 [2024-12-09 10:36:26.015719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:48.549 [2024-12-09 10:36:26.015722] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015725] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88f690): datao=0, datal=4096, cccid=7 00:25:48.549 [2024-12-09 10:36:26.015729] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f1b80) on tqpair(0x88f690): expected_datao=0, payload_size=4096 00:25:48.549 [2024-12-09 10:36:26.015733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015738] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015741] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1880) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.015772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 
10:36:26.015780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1700) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.015792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1a00) on tqpair=0x88f690 00:25:48.549 [2024-12-09 10:36:26.015813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.549 [2024-12-09 10:36:26.015819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.549 [2024-12-09 10:36:26.015821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.549 [2024-12-09 10:36:26.015825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1b80) on tqpair=0x88f690 00:25:48.549 ===================================================== 00:25:48.549 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:48.549 ===================================================== 00:25:48.549 Controller Capabilities/Features 00:25:48.549 ================================ 00:25:48.549 Vendor ID: 8086 00:25:48.549 Subsystem Vendor ID: 8086 00:25:48.549 Serial Number: SPDK00000000000001 00:25:48.549 Model Number: SPDK bdev Controller 00:25:48.549 Firmware Version: 25.01 00:25:48.549 Recommended Arb Burst: 6 00:25:48.549 IEEE OUI Identifier: e4 d2 5c 00:25:48.549 Multi-path I/O 00:25:48.549 May have multiple subsystem ports: Yes 00:25:48.549 May have multiple controllers: Yes 00:25:48.549 Associated with SR-IOV VF: No 00:25:48.549 Max Data Transfer Size: 131072 00:25:48.549 Max Number of Namespaces: 32 00:25:48.549 
Max Number of I/O Queues: 127 00:25:48.549 NVMe Specification Version (VS): 1.3 00:25:48.549 NVMe Specification Version (Identify): 1.3 00:25:48.549 Maximum Queue Entries: 128 00:25:48.549 Contiguous Queues Required: Yes 00:25:48.549 Arbitration Mechanisms Supported 00:25:48.549 Weighted Round Robin: Not Supported 00:25:48.549 Vendor Specific: Not Supported 00:25:48.549 Reset Timeout: 15000 ms 00:25:48.549 Doorbell Stride: 4 bytes 00:25:48.549 NVM Subsystem Reset: Not Supported 00:25:48.549 Command Sets Supported 00:25:48.549 NVM Command Set: Supported 00:25:48.549 Boot Partition: Not Supported 00:25:48.549 Memory Page Size Minimum: 4096 bytes 00:25:48.549 Memory Page Size Maximum: 4096 bytes 00:25:48.549 Persistent Memory Region: Not Supported 00:25:48.549 Optional Asynchronous Events Supported 00:25:48.549 Namespace Attribute Notices: Supported 00:25:48.549 Firmware Activation Notices: Not Supported 00:25:48.549 ANA Change Notices: Not Supported 00:25:48.549 PLE Aggregate Log Change Notices: Not Supported 00:25:48.549 LBA Status Info Alert Notices: Not Supported 00:25:48.549 EGE Aggregate Log Change Notices: Not Supported 00:25:48.549 Normal NVM Subsystem Shutdown event: Not Supported 00:25:48.549 Zone Descriptor Change Notices: Not Supported 00:25:48.549 Discovery Log Change Notices: Not Supported 00:25:48.549 Controller Attributes 00:25:48.549 128-bit Host Identifier: Supported 00:25:48.549 Non-Operational Permissive Mode: Not Supported 00:25:48.549 NVM Sets: Not Supported 00:25:48.549 Read Recovery Levels: Not Supported 00:25:48.549 Endurance Groups: Not Supported 00:25:48.549 Predictable Latency Mode: Not Supported 00:25:48.549 Traffic Based Keep ALive: Not Supported 00:25:48.549 Namespace Granularity: Not Supported 00:25:48.549 SQ Associations: Not Supported 00:25:48.549 UUID List: Not Supported 00:25:48.549 Multi-Domain Subsystem: Not Supported 00:25:48.549 Fixed Capacity Management: Not Supported 00:25:48.549 Variable Capacity Management: Not Supported 
00:25:48.549 Delete Endurance Group: Not Supported 00:25:48.549 Delete NVM Set: Not Supported 00:25:48.549 Extended LBA Formats Supported: Not Supported 00:25:48.549 Flexible Data Placement Supported: Not Supported 00:25:48.549 00:25:48.549 Controller Memory Buffer Support 00:25:48.549 ================================ 00:25:48.549 Supported: No 00:25:48.550 00:25:48.550 Persistent Memory Region Support 00:25:48.550 ================================ 00:25:48.550 Supported: No 00:25:48.550 00:25:48.550 Admin Command Set Attributes 00:25:48.550 ============================ 00:25:48.550 Security Send/Receive: Not Supported 00:25:48.550 Format NVM: Not Supported 00:25:48.550 Firmware Activate/Download: Not Supported 00:25:48.550 Namespace Management: Not Supported 00:25:48.550 Device Self-Test: Not Supported 00:25:48.550 Directives: Not Supported 00:25:48.550 NVMe-MI: Not Supported 00:25:48.550 Virtualization Management: Not Supported 00:25:48.550 Doorbell Buffer Config: Not Supported 00:25:48.550 Get LBA Status Capability: Not Supported 00:25:48.550 Command & Feature Lockdown Capability: Not Supported 00:25:48.550 Abort Command Limit: 4 00:25:48.550 Async Event Request Limit: 4 00:25:48.550 Number of Firmware Slots: N/A 00:25:48.550 Firmware Slot 1 Read-Only: N/A 00:25:48.550 Firmware Activation Without Reset: N/A 00:25:48.550 Multiple Update Detection Support: N/A 00:25:48.550 Firmware Update Granularity: No Information Provided 00:25:48.550 Per-Namespace SMART Log: No 00:25:48.550 Asymmetric Namespace Access Log Page: Not Supported 00:25:48.550 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:48.550 Command Effects Log Page: Supported 00:25:48.550 Get Log Page Extended Data: Supported 00:25:48.550 Telemetry Log Pages: Not Supported 00:25:48.550 Persistent Event Log Pages: Not Supported 00:25:48.550 Supported Log Pages Log Page: May Support 00:25:48.550 Commands Supported & Effects Log Page: Not Supported 00:25:48.550 Feature Identifiers & Effects Log Page:May Support 
00:25:48.550 NVMe-MI Commands & Effects Log Page: May Support 00:25:48.550 Data Area 4 for Telemetry Log: Not Supported 00:25:48.550 Error Log Page Entries Supported: 128 00:25:48.550 Keep Alive: Supported 00:25:48.550 Keep Alive Granularity: 10000 ms 00:25:48.550 00:25:48.550 NVM Command Set Attributes 00:25:48.550 ========================== 00:25:48.550 Submission Queue Entry Size 00:25:48.550 Max: 64 00:25:48.550 Min: 64 00:25:48.550 Completion Queue Entry Size 00:25:48.550 Max: 16 00:25:48.550 Min: 16 00:25:48.550 Number of Namespaces: 32 00:25:48.550 Compare Command: Supported 00:25:48.550 Write Uncorrectable Command: Not Supported 00:25:48.550 Dataset Management Command: Supported 00:25:48.550 Write Zeroes Command: Supported 00:25:48.550 Set Features Save Field: Not Supported 00:25:48.550 Reservations: Supported 00:25:48.550 Timestamp: Not Supported 00:25:48.550 Copy: Supported 00:25:48.550 Volatile Write Cache: Present 00:25:48.550 Atomic Write Unit (Normal): 1 00:25:48.550 Atomic Write Unit (PFail): 1 00:25:48.550 Atomic Compare & Write Unit: 1 00:25:48.550 Fused Compare & Write: Supported 00:25:48.550 Scatter-Gather List 00:25:48.550 SGL Command Set: Supported 00:25:48.550 SGL Keyed: Supported 00:25:48.550 SGL Bit Bucket Descriptor: Not Supported 00:25:48.550 SGL Metadata Pointer: Not Supported 00:25:48.550 Oversized SGL: Not Supported 00:25:48.550 SGL Metadata Address: Not Supported 00:25:48.550 SGL Offset: Supported 00:25:48.550 Transport SGL Data Block: Not Supported 00:25:48.550 Replay Protected Memory Block: Not Supported 00:25:48.550 00:25:48.550 Firmware Slot Information 00:25:48.550 ========================= 00:25:48.550 Active slot: 1 00:25:48.550 Slot 1 Firmware Revision: 25.01 00:25:48.550 00:25:48.550 00:25:48.550 Commands Supported and Effects 00:25:48.550 ============================== 00:25:48.550 Admin Commands 00:25:48.550 -------------- 00:25:48.550 Get Log Page (02h): Supported 00:25:48.550 Identify (06h): Supported 00:25:48.550 Abort 
(08h): Supported 00:25:48.550 Set Features (09h): Supported 00:25:48.550 Get Features (0Ah): Supported 00:25:48.550 Asynchronous Event Request (0Ch): Supported 00:25:48.550 Keep Alive (18h): Supported 00:25:48.550 I/O Commands 00:25:48.550 ------------ 00:25:48.550 Flush (00h): Supported LBA-Change 00:25:48.550 Write (01h): Supported LBA-Change 00:25:48.550 Read (02h): Supported 00:25:48.550 Compare (05h): Supported 00:25:48.550 Write Zeroes (08h): Supported LBA-Change 00:25:48.550 Dataset Management (09h): Supported LBA-Change 00:25:48.550 Copy (19h): Supported LBA-Change 00:25:48.550 00:25:48.550 Error Log 00:25:48.550 ========= 00:25:48.550 00:25:48.550 Arbitration 00:25:48.550 =========== 00:25:48.550 Arbitration Burst: 1 00:25:48.550 00:25:48.550 Power Management 00:25:48.550 ================ 00:25:48.550 Number of Power States: 1 00:25:48.550 Current Power State: Power State #0 00:25:48.550 Power State #0: 00:25:48.550 Max Power: 0.00 W 00:25:48.550 Non-Operational State: Operational 00:25:48.550 Entry Latency: Not Reported 00:25:48.550 Exit Latency: Not Reported 00:25:48.550 Relative Read Throughput: 0 00:25:48.550 Relative Read Latency: 0 00:25:48.550 Relative Write Throughput: 0 00:25:48.550 Relative Write Latency: 0 00:25:48.550 Idle Power: Not Reported 00:25:48.550 Active Power: Not Reported 00:25:48.550 Non-Operational Permissive Mode: Not Supported 00:25:48.550 00:25:48.550 Health Information 00:25:48.550 ================== 00:25:48.550 Critical Warnings: 00:25:48.550 Available Spare Space: OK 00:25:48.550 Temperature: OK 00:25:48.550 Device Reliability: OK 00:25:48.550 Read Only: No 00:25:48.550 Volatile Memory Backup: OK 00:25:48.550 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:48.550 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:48.550 Available Spare: 0% 00:25:48.550 Available Spare Threshold: 0% 00:25:48.550 Life Percentage Used:[2024-12-09 10:36:26.015911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 
[2024-12-09 10:36:26.015915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.015921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.015932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1b80, cid 7, qid 0 00:25:48.550 [2024-12-09 10:36:26.016015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1b80) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016060] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:48.550 [2024-12-09 10:36:26.016070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1100) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.550 [2024-12-09 10:36:26.016080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1280) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.550 [2024-12-09 10:36:26.016088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1400) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.550 
[2024-12-09 10:36:26.016096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.550 [2024-12-09 10:36:26.016107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.016193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016234] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.016307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016323] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:48.550 [2024-12-09 10:36:26.016327] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:48.550 [2024-12-09 10:36:26.016335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.016428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016448] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.016546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.016662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016670] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.016763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 
10:36:26.016871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.016890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.016897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.016902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.016912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.016989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.016995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.016998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.017001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.550 [2024-12-09 10:36:26.017008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.017012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.550 [2024-12-09 10:36:26.017015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.550 [2024-12-09 10:36:26.017020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.550 [2024-12-09 10:36:26.017031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.550 [2024-12-09 10:36:26.017105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.550 [2024-12-09 10:36:26.017111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.550 [2024-12-09 10:36:26.017114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.017205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.017213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 
[2024-12-09 10:36:26.017231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.017301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.017310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.017400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.017409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 
00:25:48.551 [2024-12-09 10:36:26.017419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.017516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.017524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.017615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 
[2024-12-09 10:36:26.017623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.017714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.017722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 
00:25:48.551 [2024-12-09 10:36:26.017816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.017825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.017930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.017936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.017939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.017949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.017956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.017961] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.017970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.018045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.018048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.018060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.018071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.018081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.018154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.018157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.018167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018171] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.018179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.018188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.018253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.018256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.018266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.018279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.018287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.018354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.018359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018362] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.018370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.018382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.018391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.018461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.018464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.018475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.018487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.018495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 
10:36:26.018606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.018609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.018619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.018631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.018640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.018707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.018710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.018721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.018727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.018732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 
10:36:26.018741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.018803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.022814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.022819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.022825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.022835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.022839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.022842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88f690) 00:25:48.551 [2024-12-09 10:36:26.022848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.551 [2024-12-09 10:36:26.022858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f1580, cid 3, qid 0 00:25:48.551 [2024-12-09 10:36:26.022986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:48.551 [2024-12-09 10:36:26.022992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:48.551 [2024-12-09 10:36:26.022995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:48.551 [2024-12-09 10:36:26.022998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f1580) on tqpair=0x88f690 00:25:48.551 [2024-12-09 10:36:26.023004] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:25:48.551 0% 00:25:48.551 Data Units Read: 0 00:25:48.551 Data Units Written: 0 00:25:48.551 Host Read Commands: 0 00:25:48.551 Host Write Commands: 0 
00:25:48.551 Controller Busy Time: 0 minutes 00:25:48.551 Power Cycles: 0 00:25:48.551 Power On Hours: 0 hours 00:25:48.551 Unsafe Shutdowns: 0 00:25:48.551 Unrecoverable Media Errors: 0 00:25:48.552 Lifetime Error Log Entries: 0 00:25:48.552 Warning Temperature Time: 0 minutes 00:25:48.552 Critical Temperature Time: 0 minutes 00:25:48.552 00:25:48.552 Number of Queues 00:25:48.552 ================ 00:25:48.552 Number of I/O Submission Queues: 127 00:25:48.552 Number of I/O Completion Queues: 127 00:25:48.552 00:25:48.552 Active Namespaces 00:25:48.552 ================= 00:25:48.552 Namespace ID:1 00:25:48.552 Error Recovery Timeout: Unlimited 00:25:48.552 Command Set Identifier: NVM (00h) 00:25:48.552 Deallocate: Supported 00:25:48.552 Deallocated/Unwritten Error: Not Supported 00:25:48.552 Deallocated Read Value: Unknown 00:25:48.552 Deallocate in Write Zeroes: Not Supported 00:25:48.552 Deallocated Guard Field: 0xFFFF 00:25:48.552 Flush: Supported 00:25:48.552 Reservation: Supported 00:25:48.552 Namespace Sharing Capabilities: Multiple Controllers 00:25:48.552 Size (in LBAs): 131072 (0GiB) 00:25:48.552 Capacity (in LBAs): 131072 (0GiB) 00:25:48.552 Utilization (in LBAs): 131072 (0GiB) 00:25:48.552 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:48.552 EUI64: ABCDEF0123456789 00:25:48.552 UUID: 8dd6bd0c-f86c-43e4-9416-e677bc440aba 00:25:48.552 Thin Provisioning: Not Supported 00:25:48.552 Per-NS Atomic Units: Yes 00:25:48.552 Atomic Boundary Size (Normal): 0 00:25:48.552 Atomic Boundary Size (PFail): 0 00:25:48.552 Atomic Boundary Offset: 0 00:25:48.552 Maximum Single Source Range Length: 65535 00:25:48.552 Maximum Copy Length: 65535 00:25:48.552 Maximum Source Range Count: 1 00:25:48.552 NGUID/EUI64 Never Reused: No 00:25:48.552 Namespace Write Protected: No 00:25:48.552 Number of LBA Formats: 1 00:25:48.552 Current LBA Format: LBA Format #00 00:25:48.552 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:48.552 00:25:48.552 10:36:26 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.552 rmmod nvme_tcp 00:25:48.552 rmmod nvme_fabrics 00:25:48.552 rmmod nvme_keyring 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2742751 ']' 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2742751 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2742751 ']' 
00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2742751 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2742751 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2742751' 00:25:48.552 killing process with pid 2742751 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2742751 00:25:48.552 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2742751 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.810 10:36:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.741 10:36:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:50.741 00:25:50.741 real 0m9.444s 00:25:50.741 user 0m5.983s 00:25:50.741 sys 0m4.843s 00:25:50.741 10:36:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.741 10:36:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:50.741 ************************************ 00:25:50.741 END TEST nvmf_identify 00:25:50.741 ************************************ 00:25:50.741 10:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:50.741 10:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:50.741 10:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.741 10:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.999 ************************************ 00:25:50.999 START TEST nvmf_perf 00:25:50.999 ************************************ 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:50.999 * Looking for test storage... 
00:25:50.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.999 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:50.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.999 --rc genhtml_branch_coverage=1 00:25:51.000 --rc genhtml_function_coverage=1 00:25:51.000 --rc genhtml_legend=1 00:25:51.000 --rc geninfo_all_blocks=1 00:25:51.000 --rc geninfo_unexecuted_blocks=1 00:25:51.000 00:25:51.000 ' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:25:51.000 --rc genhtml_branch_coverage=1 00:25:51.000 --rc genhtml_function_coverage=1 00:25:51.000 --rc genhtml_legend=1 00:25:51.000 --rc geninfo_all_blocks=1 00:25:51.000 --rc geninfo_unexecuted_blocks=1 00:25:51.000 00:25:51.000 ' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.000 --rc genhtml_branch_coverage=1 00:25:51.000 --rc genhtml_function_coverage=1 00:25:51.000 --rc genhtml_legend=1 00:25:51.000 --rc geninfo_all_blocks=1 00:25:51.000 --rc geninfo_unexecuted_blocks=1 00:25:51.000 00:25:51.000 ' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.000 --rc genhtml_branch_coverage=1 00:25:51.000 --rc genhtml_function_coverage=1 00:25:51.000 --rc genhtml_legend=1 00:25:51.000 --rc geninfo_all_blocks=1 00:25:51.000 --rc geninfo_unexecuted_blocks=1 00:25:51.000 00:25:51.000 ' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:51.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:51.000 10:36:28 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.000 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.572 10:36:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.572 
10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:57.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:57.572 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.572 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:57.573 Found net devices under 0000:86:00.0: cvl_0_0 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.573 10:36:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:57.573 Found net devices under 0000:86:00.1: cvl_0_1 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:25:57.573 00:25:57.573 --- 10.0.0.2 ping statistics --- 00:25:57.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.573 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:25:57.573 00:25:57.573 --- 10.0.0.1 ping statistics --- 00:25:57.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.573 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2746517 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2746517 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2746517 ']' 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.573 [2024-12-09 10:36:34.694916] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:57.573 [2024-12-09 10:36:34.694963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.573 [2024-12-09 10:36:34.773566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.573 [2024-12-09 10:36:34.816137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.573 [2024-12-09 10:36:34.816172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.573 [2024-12-09 10:36:34.816179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.573 [2024-12-09 10:36:34.816185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.573 [2024-12-09 10:36:34.816190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:57.573 [2024-12-09 10:36:34.817573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.573 [2024-12-09 10:36:34.817688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.573 [2024-12-09 10:36:34.817793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.573 [2024-12-09 10:36:34.817794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:57.573 10:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:00.844 10:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:00.844 10:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:00.844 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:26:00.844 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:00.844 10:36:38 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:00.844 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:26:00.844 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:00.845 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:00.845 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:01.101 [2024-12-09 10:36:38.603261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.101 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.358 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:01.359 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.359 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:01.359 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:01.615 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.873 [2024-12-09 10:36:39.391534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.873 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:26:02.129 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:26:02.129 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:02.129 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:02.129 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:03.499 Initializing NVMe Controllers 00:26:03.499 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:26:03.499 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:26:03.499 Initialization complete. Launching workers. 00:26:03.499 ======================================================== 00:26:03.499 Latency(us) 00:26:03.499 Device Information : IOPS MiB/s Average min max 00:26:03.499 PCIE (0000:5e:00.0) NSID 1 from core 0: 98024.22 382.91 325.83 38.51 4567.63 00:26:03.499 ======================================================== 00:26:03.499 Total : 98024.22 382.91 325.83 38.51 4567.63 00:26:03.499 00:26:03.499 10:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.432 Initializing NVMe Controllers 00:26:04.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:04.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:04.432 Initialization complete. Launching workers. 
00:26:04.432 ======================================================== 00:26:04.432 Latency(us) 00:26:04.432 Device Information : IOPS MiB/s Average min max 00:26:04.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 244.00 0.95 4130.67 120.62 45332.46 00:26:04.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20733.21 7202.45 47915.54 00:26:04.432 ======================================================== 00:26:04.432 Total : 294.00 1.15 6954.23 120.62 47915.54 00:26:04.432 00:26:04.432 10:36:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:05.805 Initializing NVMe Controllers 00:26:05.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:05.805 Initialization complete. Launching workers. 
00:26:05.805 ======================================================== 00:26:05.805 Latency(us) 00:26:05.805 Device Information : IOPS MiB/s Average min max 00:26:05.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11224.33 43.85 2850.07 460.16 7563.01 00:26:05.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3883.28 15.17 8250.03 6339.09 22133.14 00:26:05.805 ======================================================== 00:26:05.805 Total : 15107.61 59.01 4238.08 460.16 22133.14 00:26:05.805 00:26:05.805 10:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:05.805 10:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:05.805 10:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.330 Initializing NVMe Controllers 00:26:08.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.330 Controller IO queue size 128, less than required. 00:26:08.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.330 Controller IO queue size 128, less than required. 00:26:08.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:08.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:08.330 Initialization complete. Launching workers. 
00:26:08.330 ======================================================== 00:26:08.330 Latency(us) 00:26:08.331 Device Information : IOPS MiB/s Average min max 00:26:08.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1813.97 453.49 71570.22 48471.34 122623.02 00:26:08.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.35 146.09 232808.73 80231.00 329665.59 00:26:08.331 ======================================================== 00:26:08.331 Total : 2398.32 599.58 110855.74 48471.34 329665.59 00:26:08.331 00:26:08.331 10:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:08.586 No valid NVMe controllers or AIO or URING devices found 00:26:08.586 Initializing NVMe Controllers 00:26:08.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.586 Controller IO queue size 128, less than required. 00:26:08.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.586 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:08.586 Controller IO queue size 128, less than required. 00:26:08.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.586 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:08.586 WARNING: Some requested NVMe devices were skipped 00:26:08.586 10:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:11.857 Initializing NVMe Controllers 00:26:11.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.857 Controller IO queue size 128, less than required. 00:26:11.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:11.857 Controller IO queue size 128, less than required. 00:26:11.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:11.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:11.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:11.857 Initialization complete. Launching workers. 
00:26:11.857 00:26:11.857 ==================== 00:26:11.857 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:11.857 TCP transport: 00:26:11.857 polls: 15370 00:26:11.857 idle_polls: 12086 00:26:11.857 sock_completions: 3284 00:26:11.857 nvme_completions: 6109 00:26:11.857 submitted_requests: 9152 00:26:11.857 queued_requests: 1 00:26:11.857 00:26:11.857 ==================== 00:26:11.857 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:11.857 TCP transport: 00:26:11.857 polls: 15717 00:26:11.857 idle_polls: 11482 00:26:11.857 sock_completions: 4235 00:26:11.857 nvme_completions: 6695 00:26:11.857 submitted_requests: 9956 00:26:11.857 queued_requests: 1 00:26:11.857 ======================================================== 00:26:11.857 Latency(us) 00:26:11.857 Device Information : IOPS MiB/s Average min max 00:26:11.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1526.82 381.70 85150.39 49838.06 139920.22 00:26:11.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1673.30 418.33 77764.50 43127.58 131728.02 00:26:11.857 ======================================================== 00:26:11.857 Total : 3200.12 800.03 81288.41 43127.58 139920.22 00:26:11.857 00:26:11.857 10:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:11.857 10:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.857 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:11.857 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:11.858 rmmod nvme_tcp 00:26:11.858 rmmod nvme_fabrics 00:26:11.858 rmmod nvme_keyring 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2746517 ']' 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2746517 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2746517 ']' 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2746517 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2746517 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2746517' 00:26:11.858 killing process with pid 2746517 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2746517 00:26:11.858 10:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2746517 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.754 10:36:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.663 10:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.663 00:26:15.663 real 0m24.821s 00:26:15.663 user 1m5.192s 00:26:15.663 sys 0m8.380s 00:26:15.663 10:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.663 10:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:15.663 ************************************ 00:26:15.663 END TEST nvmf_perf 00:26:15.663 ************************************ 00:26:15.663 10:36:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:15.663 10:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.663 10:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.663 10:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.923 ************************************ 00:26:15.923 START TEST nvmf_fio_host 00:26:15.923 ************************************ 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:15.923 * Looking for test storage... 00:26:15.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.923 10:36:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.923 10:36:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.923 --rc genhtml_branch_coverage=1 00:26:15.923 --rc genhtml_function_coverage=1 00:26:15.923 --rc genhtml_legend=1 00:26:15.923 --rc geninfo_all_blocks=1 00:26:15.923 --rc geninfo_unexecuted_blocks=1 00:26:15.923 00:26:15.923 ' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.923 --rc genhtml_branch_coverage=1 00:26:15.923 --rc genhtml_function_coverage=1 00:26:15.923 --rc genhtml_legend=1 00:26:15.923 --rc geninfo_all_blocks=1 00:26:15.923 --rc geninfo_unexecuted_blocks=1 00:26:15.923 00:26:15.923 ' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.923 --rc genhtml_branch_coverage=1 00:26:15.923 --rc genhtml_function_coverage=1 00:26:15.923 --rc genhtml_legend=1 00:26:15.923 --rc geninfo_all_blocks=1 00:26:15.923 --rc geninfo_unexecuted_blocks=1 00:26:15.923 00:26:15.923 ' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.923 --rc genhtml_branch_coverage=1 00:26:15.923 --rc genhtml_function_coverage=1 00:26:15.923 --rc genhtml_legend=1 00:26:15.923 --rc geninfo_all_blocks=1 00:26:15.923 --rc geninfo_unexecuted_blocks=1 00:26:15.923 00:26:15.923 ' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.923 10:36:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.923 10:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:26:22.620 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:22.620 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:22.620 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.621 10:36:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:22.621 Found net devices under 0000:86:00.0: cvl_0_0 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:22.621 Found net devices under 0000:86:00.1: cvl_0_1 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.621 10:36:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:22.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:26:22.621 00:26:22.621 --- 10.0.0.2 ping statistics --- 00:26:22.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.621 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
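The `nvmf_tcp_init` phase above moves the target NIC into a network namespace, assigns the 10.0.0.x addresses, opens port 4420, and ping-verifies both directions. A condensed sketch of that wiring, emitted as a dry run (pipe the output to a root shell to actually apply it; interface and namespace names are taken from this trace):

```shell
# Print the namespace/addressing commands the test performs between
# initiator interface ($ini_if) and target interface ($tgt_if).
setup_tcp_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}

setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Isolating the target side in its own namespace is what lets a single host act as both NVMe/TCP initiator and target over a real NIC pair.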
00:26:22.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:26:22.621 00:26:22.621 --- 10.0.0.1 ping statistics --- 00:26:22.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.621 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2752633 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2752633 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2752633 ']' 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.621 [2024-12-09 10:36:59.555774] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:26:22.621 [2024-12-09 10:36:59.555821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.621 [2024-12-09 10:36:59.636557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.621 [2024-12-09 10:36:59.678539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.621 [2024-12-09 10:36:59.678576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:22.621 [2024-12-09 10:36:59.678583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.621 [2024-12-09 10:36:59.678588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.621 [2024-12-09 10:36:59.678593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.621 [2024-12-09 10:36:59.680051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.621 [2024-12-09 10:36:59.680155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.621 [2024-12-09 10:36:59.680260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.621 [2024-12-09 10:36:59.680261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:22.621 [2024-12-09 10:36:59.954500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:22.621 10:36:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.621 10:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:22.621 Malloc1 00:26:22.621 10:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.879 10:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:23.136 10:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.136 [2024-12-09 10:37:00.796818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.136 10:37:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:23.394 10:37:01 
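The RPC calls scattered through the trace above set up the target: transport, backing bdev, subsystem, namespace, listener. Collected into one place as a dry-run sketch (the `echo` prefix is illustrative; drop it and use the real `scripts/rpc.py` path to execute; NQN and serial are the ones from this run):

```shell
# Dry-run the target-side RPC sequence from host/fio.sh.
rpc="echo scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The ordering matters: the transport must exist before any listener is added, and the bdev must exist before it can be attached as a namespace.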
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:23.394 10:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:23.665 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:23.665 fio-3.35 00:26:23.665 Starting 1 thread 00:26:26.215 00:26:26.215 test: (groupid=0, jobs=1): err= 0: pid=2753185: Mon Dec 9 10:37:03 2024 00:26:26.215 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.2MiB/2005msec) 00:26:26.215 slat (nsec): min=1533, max=165880, avg=1681.17, stdev=1520.97 00:26:26.215 clat (usec): min=2114, max=10846, avg=5938.37, stdev=461.96 00:26:26.215 lat (usec): min=2138, max=10847, avg=5940.05, stdev=461.85 00:26:26.215 clat percentiles (usec): 00:26:26.215 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604], 00:26:26.215 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:26:26.215 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 00:26:26.215 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8356], 99.95th=[10028], 00:26:26.215 | 99.99th=[10290] 00:26:26.215 bw ( KiB/s): min=46736, max=48304, per=99.93%, avg=47580.00, stdev=699.88, samples=4 00:26:26.215 iops : min=11684, max=12076, avg=11895.00, stdev=174.97, samples=4 00:26:26.215 write: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2005msec); 0 zone resets 00:26:26.215 slat (nsec): min=1569, max=161479, avg=1743.60, stdev=1147.84 00:26:26.215 clat (usec): min=1643, max=9641, avg=4808.51, stdev=378.29 00:26:26.215 lat (usec): min=1654, max=9643, avg=4810.25, stdev=378.22 00:26:26.215 clat percentiles (usec): 00:26:26.215 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:26:26.215 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 
00:26:26.215 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:26:26.216 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7635], 99.95th=[ 8848], 00:26:26.216 | 99.99th=[ 9503] 00:26:26.216 bw ( KiB/s): min=47168, max=47680, per=100.00%, avg=47394.00, stdev=217.46, samples=4 00:26:26.216 iops : min=11792, max=11920, avg=11848.50, stdev=54.37, samples=4 00:26:26.216 lat (msec) : 2=0.02%, 4=0.66%, 10=99.29%, 20=0.03% 00:26:26.216 cpu : usr=74.75%, sys=24.30%, ctx=80, majf=0, minf=2 00:26:26.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:26.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:26.216 issued rwts: total=23865,23749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:26.216 00:26:26.216 Run status group 0 (all jobs): 00:26:26.216 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.2MiB (97.8MB), run=2005-2005msec 00:26:26.216 WRITE: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2005-2005msec 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:26.216 10:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:26.474 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:26.474 fio-3.35 00:26:26.474 Starting 1 thread 00:26:28.995 00:26:28.995 test: (groupid=0, jobs=1): err= 0: pid=2753691: Mon Dec 9 10:37:06 2024 00:26:28.995 read: IOPS=10.6k, BW=165MiB/s (173MB/s)(331MiB/2007msec) 00:26:28.995 slat (nsec): min=2460, max=81920, avg=2825.47, stdev=1305.70 00:26:28.995 clat (usec): min=1810, max=49066, avg=7154.11, stdev=4487.23 00:26:28.995 lat (usec): min=1813, max=49069, avg=7156.94, stdev=4487.28 00:26:28.995 clat percentiles (usec): 00:26:28.995 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:26:28.995 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7111], 00:26:28.995 | 70.00th=[ 7373], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9503], 00:26:28.995 | 99.00th=[42730], 99.50th=[46400], 99.90th=[48497], 99.95th=[49021], 00:26:28.995 | 99.99th=[49021] 00:26:28.995 bw ( KiB/s): min=69792, max=93728, per=50.22%, avg=84792.00, stdev=11264.95, samples=4 00:26:28.995 iops : min= 4362, max= 5858, avg=5299.50, stdev=704.06, samples=4 00:26:28.995 write: IOPS=6430, BW=100MiB/s (105MB/s)(173MiB/1726msec); 0 zone resets 00:26:28.995 slat (usec): min=28, max=387, avg=31.81, stdev= 7.91 00:26:28.995 clat (usec): min=4780, max=15315, avg=8615.37, stdev=1469.59 00:26:28.995 lat (usec): min=4812, max=15432, avg=8647.18, stdev=1471.58 00:26:28.995 clat percentiles (usec): 00:26:28.995 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 6915], 
20.00th=[ 7439], 00:26:28.995 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:26:28.995 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[11338], 00:26:28.995 | 99.00th=[12649], 99.50th=[13435], 99.90th=[15008], 99.95th=[15139], 00:26:28.995 | 99.99th=[15139] 00:26:28.995 bw ( KiB/s): min=73792, max=97760, per=85.69%, avg=88160.00, stdev=11544.89, samples=4 00:26:28.995 iops : min= 4612, max= 6110, avg=5510.00, stdev=721.56, samples=4 00:26:28.995 lat (msec) : 2=0.02%, 4=1.68%, 10=90.70%, 20=6.82%, 50=0.79% 00:26:28.995 cpu : usr=85.09%, sys=14.21%, ctx=40, majf=0, minf=2 00:26:28.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:28.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:28.995 issued rwts: total=21177,11099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:28.995 00:26:28.995 Run status group 0 (all jobs): 00:26:28.995 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=331MiB (347MB), run=2007-2007msec 00:26:28.995 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=173MiB (182MB), run=1726-1726msec 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # 
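Both fio runs above use the same invocation shape: preload the SPDK NVMe ioengine and address the NVMe-oF target through a transport-ID `--filename` instead of a block device. A sketch of that command line as a printing helper (paths are the ones from this workspace; the helper itself is illustrative):

```shell
# Build the fio_plugin command line: SPDK ioengine via LD_PRELOAD,
# NVMe/TCP target selected through the filename transport ID.
fio_cmd() {
    local plugin=$1 job=$2
    echo "LD_PRELOAD=$plugin /usr/src/fio/fio $job" \
        "--filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'"
}

fio_cmd spdk/build/fio/spdk_nvme spdk/app/fio/nvme/example_config.fio
```

Encoding the transport/address/namespace in `--filename` is what lets an unmodified fio binary drive a remote NVMe-oF namespace through the preloaded engine.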
nvmfcleanup 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:28.995 rmmod nvme_tcp 00:26:28.995 rmmod nvme_fabrics 00:26:28.995 rmmod nvme_keyring 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2752633 ']' 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2752633 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2752633 ']' 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2752633 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.995 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2752633 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2752633' 00:26:29.254 killing process with pid 2752633 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2752633 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2752633 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.254 10:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.794 10:37:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:31.794 00:26:31.794 real 0m15.590s 00:26:31.794 user 0m45.793s 00:26:31.794 sys 0m6.367s 00:26:31.794 10:37:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.794 10:37:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.794 
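`nvmftestfini` above unwinds everything in reverse: kill the target, unload the nvme modules, strip the SPDK iptables rules, and remove the namespace. A dry-run sketch of that teardown order (run the emitted commands as root to reproduce; module and interface names are from this trace):

```shell
# Print the teardown sequence mirroring nvmftestfini/nvmf_tcp_fini.
teardown() {
    cat <<EOF
kill $1
modprobe -r nvme-tcp nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
EOF
}

teardown 2752633
```

Tagging the firewall rules with an `SPDK_NVMF` comment at setup time is what makes the `grep -v | iptables-restore` step able to remove only the test's rules.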
************************************ 00:26:31.794 END TEST nvmf_fio_host 00:26:31.794 ************************************ 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.794 ************************************ 00:26:31.794 START TEST nvmf_failover 00:26:31.794 ************************************ 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:31.794 * Looking for test storage... 00:26:31.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.794 10:37:09 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:31.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.794 --rc genhtml_branch_coverage=1 00:26:31.794 --rc genhtml_function_coverage=1 00:26:31.794 --rc genhtml_legend=1 00:26:31.794 --rc geninfo_all_blocks=1 00:26:31.794 --rc geninfo_unexecuted_blocks=1 00:26:31.794 00:26:31.794 ' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:31.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.794 --rc genhtml_branch_coverage=1 00:26:31.794 --rc genhtml_function_coverage=1 00:26:31.794 --rc genhtml_legend=1 00:26:31.794 --rc geninfo_all_blocks=1 00:26:31.794 --rc geninfo_unexecuted_blocks=1 00:26:31.794 00:26:31.794 ' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:31.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.794 --rc genhtml_branch_coverage=1 00:26:31.794 --rc genhtml_function_coverage=1 00:26:31.794 --rc genhtml_legend=1 00:26:31.794 --rc geninfo_all_blocks=1 00:26:31.794 --rc geninfo_unexecuted_blocks=1 00:26:31.794 00:26:31.794 ' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:31.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.794 --rc genhtml_branch_coverage=1 00:26:31.794 --rc genhtml_function_coverage=1 00:26:31.794 --rc 
genhtml_legend=1 00:26:31.794 --rc geninfo_all_blocks=1 00:26:31.794 --rc geninfo_unexecuted_blocks=1 00:26:31.794 00:26:31.794 ' 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.794 10:37:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.794 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.366 10:37:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:38.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:38.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:38.366 Found net devices under 0000:86:00.0: cvl_0_0 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:38.366 Found net devices under 0000:86:00.1: cvl_0_1 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.366 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:38.366 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:38.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:26:38.366 00:26:38.366 --- 10.0.0.2 ping statistics --- 00:26:38.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.367 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:38.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:38.367 00:26:38.367 --- 10.0.0.1 ping statistics --- 00:26:38.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.367 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2757552 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2757552 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2757552 ']' 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:38.367 [2024-12-09 10:37:15.249607] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:26:38.367 [2024-12-09 10:37:15.249648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.367 [2024-12-09 10:37:15.330016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:38.367 [2024-12-09 10:37:15.371180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.367 [2024-12-09 10:37:15.371218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.367 [2024-12-09 10:37:15.371224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.367 [2024-12-09 10:37:15.371230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:38.367 [2024-12-09 10:37:15.371235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.367 [2024-12-09 10:37:15.372635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.367 [2024-12-09 10:37:15.372740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.367 [2024-12-09 10:37:15.372740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:38.367 [2024-12-09 10:37:15.677277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:38.367 Malloc0 00:26:38.367 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:38.625 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:38.625 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.882 [2024-12-09 10:37:16.481050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.882 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:39.140 [2024-12-09 10:37:16.677601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:39.140 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:39.398 [2024-12-09 10:37:16.874253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2757814 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2757814 /var/tmp/bdevperf.sock 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2757814 ']' 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:39.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.398 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:39.655 10:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.655 10:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:39.655 10:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:39.913 NVMe0n1 00:26:39.913 10:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:40.479 00:26:40.479 10:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2758042 00:26:40.479 10:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:40.479 10:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:26:41.412 10:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.412 [2024-12-09 10:37:19.098561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810e20 is same with the state(6) to be set 00:26:41.412 [2024-12-09 10:37:19.098607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810e20 is same with the state(6) to be set 00:26:41.412 [2024-12-09 10:37:19.098614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810e20 is same with the state(6) to be set 00:26:41.412 [2024-12-09 10:37:19.098621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810e20 is same with the state(6) to be set 00:26:41.412 [2024-12-09 10:37:19.098627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810e20 is same with the state(6) to be set 00:26:41.412 [2024-12-09 10:37:19.098633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810e20 is same with the state(6) to be set 00:26:41.412 [2024-12-09 10:37:19.098639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810e20 is same with the state(6) to be set 00:26:41.412 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:44.690 10:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:44.948 00:26:44.948 10:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4421 00:26:45.205 [2024-12-09 10:37:22.779216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 
[2024-12-09 10:37:22.779317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779387] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779464] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.205 [2024-12-09 10:37:22.779498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.206 [2024-12-09 10:37:22.779504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.206 [2024-12-09 10:37:22.779510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.206 [2024-12-09 10:37:22.779516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.206 [2024-12-09 10:37:22.779522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811c40 is same with the state(6) to be set 00:26:45.206 10:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:48.475 10:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.475 [2024-12-09 10:37:26.002031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.475 10:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:49.403 10:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:49.660 [2024-12-09 10:37:27.217956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.217998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218041] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.660 [2024-12-09 10:37:27.218058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218111] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218181] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 [2024-12-09 10:37:27.218216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812800 is same with the state(6) to be set 00:26:49.661 10:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2758042 00:26:56.221 { 00:26:56.221 "results": [ 00:26:56.221 { 00:26:56.221 "job": "NVMe0n1", 00:26:56.221 "core_mask": "0x1", 00:26:56.221 "workload": "verify", 00:26:56.221 "status": "finished", 00:26:56.221 "verify_range": { 00:26:56.221 "start": 0, 00:26:56.221 "length": 16384 00:26:56.221 }, 00:26:56.221 "queue_depth": 128, 00:26:56.221 "io_size": 4096, 00:26:56.221 "runtime": 15.049498, 00:26:56.221 "iops": 11304.696010458289, 00:26:56.221 "mibps": 44.15896879085269, 00:26:56.221 "io_failed": 4213, 00:26:56.221 "io_timeout": 0, 00:26:56.221 "avg_latency_us": 10997.7622487035, 00:26:56.221 "min_latency_us": 421.30285714285714, 00:26:56.221 "max_latency_us": 43940.32761904762 00:26:56.221 } 00:26:56.221 ], 00:26:56.221 "core_count": 1 00:26:56.221 } 00:26:56.221 10:37:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2757814 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2757814 ']' 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2757814 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757814 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757814' 00:26:56.221 killing process with pid 2757814 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2757814 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2757814 00:26:56.221 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:56.221 [2024-12-09 10:37:16.933725] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:26:56.221 [2024-12-09 10:37:16.933777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757814 ] 00:26:56.221 [2024-12-09 10:37:17.009960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.221 [2024-12-09 10:37:17.051437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.221 Running I/O for 15 seconds... 00:26:56.221 11308.00 IOPS, 44.17 MiB/s [2024-12-09T09:37:33.945Z] [2024-12-09 10:37:19.101272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.221 [2024-12-09 10:37:19.101436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101450] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 
nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 
[2024-12-09 10:37:19.101626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.221 [2024-12-09 10:37:19.101664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.221 [2024-12-09 10:37:19.101672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.222 [2024-12-09 10:37:19.101778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.222 [2024-12-09 10:37:19.101786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99912 
[... log truncated: repeated nvme_qpair.c 243:nvme_io_qpair_print_command WRITE / 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) entries for lba 99912 through 100496, interleaved with 579:nvme_qpair_abort_queued_reqs and 558:nvme_qpair_manual_complete_request notices ...]
00:26:56.224 [2024-12-09 10:37:19.103271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.224 [2024-12-09 10:37:19.103277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.224 [2024-12-09 10:37:19.103282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.224 [2024-12-09 10:37:19.103287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100488 len:8 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-12-09 10:37:19.103294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.224 [2024-12-09 10:37:19.103300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.224 [2024-12-09 10:37:19.103305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.224 [2024-12-09 10:37:19.103310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-12-09 10:37:19.103317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.224 [2024-12-09 10:37:19.103324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.224 [2024-12-09 10:37:19.103329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.224 [2024-12-09 10:37:19.114283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100504 len:8 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-12-09 10:37:19.114294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.224 [2024-12-09 10:37:19.114301] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.224 [2024-12-09 10:37:19.114307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.224 [2024-12-09 10:37:19.114313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-12-09 10:37:19.114319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100520 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100528 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114382] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100552 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100568 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100576 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100584 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114538] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100592 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100600 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100608 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100616 len:8 PRP1 0x0 PRP2 0x0 
00:26:56.225 [2024-12-09 10:37:19.114616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114691] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-12-09 10:37:19.114753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-12-09 10:37:19.114759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.225 [2024-12-09 10:37:19.114764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.225 [2024-12-09 10:37:19.114769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 00:26:56.226 [2024-12-09 10:37:19.114775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.226 [2024-12-09 10:37:19.114820] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:56.226 [2024-12-09 10:37:19.114844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.226 [2024-12-09 10:37:19.114851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.226 [2024-12-09 10:37:19.114859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.226 [2024-12-09 10:37:19.114867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.226 [2024-12-09 10:37:19.114874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.226 [2024-12-09 10:37:19.114880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.226 [2024-12-09 10:37:19.114888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.226 [2024-12-09 10:37:19.114894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.226 [2024-12-09 10:37:19.114900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:26:56.226 [2024-12-09 10:37:19.114940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x810fa0 (9): Bad file descriptor
00:26:56.226 [2024-12-09 10:37:19.117713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:56.226 [2024-12-09 10:37:19.146850] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:56.226 11104.00 IOPS, 43.38 MiB/s [2024-12-09T09:37:33.950Z] 11203.00 IOPS, 43.76 MiB/s [2024-12-09T09:37:33.950Z] 11263.50 IOPS, 44.00 MiB/s [2024-12-09T09:37:33.950Z]
00:26:56.226 [2024-12-09 10:37:22.780446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:56.226 [2024-12-09 10:37:22.780479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE / ABORTED - SQ DELETION (00/08) pair repeats for queued WRITEs lba:49328 through lba:49696 (len:8, varying cid, SGL DATA BLOCK OFFSET 0x0 len:0x1000), interleaved with queued READs lba:49008 through lba:49056 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all aborted with the same status ...]
[2024-12-09 10:37:22.781302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 10:37:22.781538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.227 [2024-12-09 10:37:22.781545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.227 [2024-12-09 
10:37:22.781553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781631] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.228 [2024-12-09 10:37:22.781774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:56.228 [2024-12-09 10:37:22.781968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.781989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.781995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.782009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.782023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.782037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.782051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.228 [2024-12-09 10:37:22.782064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.228 [2024-12-09 10:37:22.782089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50000 len:8 PRP1 0x0 PRP2 0x0 00:26:56.228 [2024-12-09 10:37:22.782097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.228 [2024-12-09 10:37:22.782133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.228 [2024-12-09 10:37:22.782148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.228 [2024-12-09 10:37:22.782155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.228 
[2024-12-09 10:37:22.782161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.229 [2024-12-09 10:37:22.782175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x810fa0 is same with the state(6) to be set 00:26:56.229 [2024-12-09 10:37:22.782366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50008 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50016 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782421] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50024 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49192 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49200 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49208 len:8 PRP1 0x0 PRP2 0x0 
00:26:56.229 [2024-12-09 10:37:22.782503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49216 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49224 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49232 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782581] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49240 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49248 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49256 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.782650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.782655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.782662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49264 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.782668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.793934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.793942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.793949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49272 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.793956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.793962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.793968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.793973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49280 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.793979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.793986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.793991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.793996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49288 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.794003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.794009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.794014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.794019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49296 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.794025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.794032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.794037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.794042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49304 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.794048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.794055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.794060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.794065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49312 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.794071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.794077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.794082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.794088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49320 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.794095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.794101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.794108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.794114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49328 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.794120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.794127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.794132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.229 [2024-12-09 10:37:22.794137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49336 len:8 PRP1 0x0 PRP2 0x0 00:26:56.229 [2024-12-09 10:37:22.794143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.229 [2024-12-09 10:37:22.794150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.229 [2024-12-09 10:37:22.794154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49344 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49352 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49360 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49368 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 
[2024-12-09 10:37:22.794246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49376 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49384 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49392 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:49400 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49408 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49416 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49424 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794406] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49432 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49440 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49448 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 
10:37:22.794486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49456 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49464 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49472 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49480 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49488 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49496 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.230 [2024-12-09 10:37:22.794617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.230 [2024-12-09 10:37:22.794622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49504 len:8 PRP1 0x0 PRP2 0x0 00:26:56.230 [2024-12-09 10:37:22.794628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.230 [2024-12-09 10:37:22.794635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794640] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49512 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49520 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49528 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49536 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 
[2024-12-09 10:37:22.794721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49544 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49552 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49560 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49568 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49576 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49584 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49008 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49016 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49024 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49032 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.794977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49040 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.794984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.794991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.794996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49048 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49056 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49592 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49600 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49608 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49616 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49624 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.231 [2024-12-09 10:37:22.795161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49632 len:8 PRP1 0x0 PRP2 0x0 00:26:56.231 [2024-12-09 10:37:22.795168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.231 [2024-12-09 10:37:22.795174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.231 [2024-12-09 10:37:22.795179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.795184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49640 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.795192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.795198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 
[2024-12-09 10:37:22.795203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.795208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49648 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.795214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.795220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.795225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.795231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49656 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.795237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.795243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.795248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.795253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49664 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.795259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.795265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.795270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.795275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:49672 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.795281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.795287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.795292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.802898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49680 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.802911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.802922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.802928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.802935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49688 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.802943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.802952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.802959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.802965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49696 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.802974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.802983] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.802992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.802999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49704 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49712 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49720 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 
10:37:22.803090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49728 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49736 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49744 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49752 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49760 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49768 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49776 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803297] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49784 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49792 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49800 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49808 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 
[2024-12-09 10:37:22.803407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49816 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.232 [2024-12-09 10:37:22.803445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.232 [2024-12-09 10:37:22.803452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.232 [2024-12-09 10:37:22.803459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49824 len:8 PRP1 0x0 PRP2 0x0 00:26:56.232 [2024-12-09 10:37:22.803472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49832 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49064 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49072 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49080 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803615] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49088 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49096 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49104 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49112 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 
[2024-12-09 10:37:22.803724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49120 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49128 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49136 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 
[2024-12-09 10:37:22.803835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49144 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49160 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49168 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49176 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.803975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.803982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.803989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49184 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.803997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.804006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.804012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.804019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49840 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.804027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.804037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.804044] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.804051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49848 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.804059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.804068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.804074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.804081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49856 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.804089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.804098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.804105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.804112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49864 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.804120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.804129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.804136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.804143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49872 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 
[2024-12-09 10:37:22.804152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.804160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.233 [2024-12-09 10:37:22.804167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.233 [2024-12-09 10:37:22.804174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49880 len:8 PRP1 0x0 PRP2 0x0 00:26:56.233 [2024-12-09 10:37:22.804182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.233 [2024-12-09 10:37:22.804192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49888 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49896 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49904 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49912 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49920 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49928 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49936 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49944 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49952 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49960 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49968 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49976 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49984 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49992 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.234 [2024-12-09 10:37:22.804628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.234 [2024-12-09 10:37:22.804635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50000 len:8 PRP1 0x0 PRP2 0x0 00:26:56.234 [2024-12-09 10:37:22.804643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:22.804691] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:56.234 [2024-12-09 10:37:22.804704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:26:56.234 [2024-12-09 10:37:22.804740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x810fa0 (9): Bad file descriptor 00:26:56.234 [2024-12-09 10:37:22.809181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:56.234 [2024-12-09 10:37:22.836578] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:26:56.234 11167.80 IOPS, 43.62 MiB/s [2024-12-09T09:37:33.958Z] 11225.00 IOPS, 43.85 MiB/s [2024-12-09T09:37:33.958Z] 11253.43 IOPS, 43.96 MiB/s [2024-12-09T09:37:33.958Z] 11270.75 IOPS, 44.03 MiB/s [2024-12-09T09:37:33.958Z] 11279.33 IOPS, 44.06 MiB/s [2024-12-09T09:37:33.958Z] [2024-12-09 10:37:27.218562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.234 [2024-12-09 10:37:27.218597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:27.218612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.234 [2024-12-09 10:37:27.218619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:27.218628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.234 [2024-12-09 10:37:27.218634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:27.218643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.234 
[2024-12-09 10:37:27.218649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:27.218657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.234 [2024-12-09 10:37:27.218664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:27.218672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.234 [2024-12-09 10:37:27.218679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:27.218687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.234 [2024-12-09 10:37:27.218693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.234 [2024-12-09 10:37:27.218701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:56.235 [2024-12-09 10:37:27.218908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.218990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.218996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.219011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.219025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.219039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.219053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.235 [2024-12-09 10:37:27.219067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 
[2024-12-09 10:37:27.219341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219420] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.235 [2024-12-09 10:37:27.219468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.235 [2024-12-09 10:37:27.219476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 
[2024-12-09 10:37:27.219748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.219982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.219989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 
10:37:27.219996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.220002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.220010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.220017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.220026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.220033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.220042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.236 [2024-12-09 10:37:27.220049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.236 [2024-12-09 10:37:27.220057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220077] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.237 [2024-12-09 10:37:27.220398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66448 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66456 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66464 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66472 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66480 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66488 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66496 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66504 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66512 len:8 PRP1 0x0 PRP2 0x0 00:26:56.237 [2024-12-09 10:37:27.220631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.237 [2024-12-09 10:37:27.220639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.237 [2024-12-09 10:37:27.220645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.237 [2024-12-09 10:37:27.220650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66520 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.220656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.220663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.238 [2024-12-09 10:37:27.220668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.238 [2024-12-09 10:37:27.220674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66528 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.220680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.220688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.238 [2024-12-09 10:37:27.220693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.238 [2024-12-09 10:37:27.220698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66536 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.220704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.220711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.238 [2024-12-09 10:37:27.220716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.238 [2024-12-09 10:37:27.220721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66544 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.220728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.220734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.238 [2024-12-09 10:37:27.220739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.238 [2024-12-09 10:37:27.220744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66552 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.220750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.220757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.238 
[2024-12-09 10:37:27.220762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.238 [2024-12-09 10:37:27.220767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66560 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.232078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.232090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.238 [2024-12-09 10:37:27.232096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.238 [2024-12-09 10:37:27.232102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66568 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.232108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.232116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:56.238 [2024-12-09 10:37:27.232121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:56.238 [2024-12-09 10:37:27.232127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66576 len:8 PRP1 0x0 PRP2 0x0 00:26:56.238 [2024-12-09 10:37:27.232133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.232176] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:56.238 [2024-12-09 10:37:27.232199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.238 [2024-12-09 
10:37:27.232206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.232214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.238 [2024-12-09 10:37:27.232220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.232228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.238 [2024-12-09 10:37:27.232234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.232241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.238 [2024-12-09 10:37:27.232248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.238 [2024-12-09 10:37:27.232254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:56.238 [2024-12-09 10:37:27.232283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x810fa0 (9): Bad file descriptor 00:26:56.238 [2024-12-09 10:37:27.235365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:56.238 [2024-12-09 10:37:27.265483] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:26:56.238 11250.10 IOPS, 43.95 MiB/s [2024-12-09T09:37:33.962Z] 11274.09 IOPS, 44.04 MiB/s [2024-12-09T09:37:33.962Z] 11293.08 IOPS, 44.11 MiB/s [2024-12-09T09:37:33.962Z] 11299.92 IOPS, 44.14 MiB/s [2024-12-09T09:37:33.962Z] 11313.79 IOPS, 44.19 MiB/s [2024-12-09T09:37:33.962Z] 11333.47 IOPS, 44.27 MiB/s 00:26:56.238 Latency(us) 00:26:56.238 [2024-12-09T09:37:33.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.238 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:56.238 Verification LBA range: start 0x0 length 0x4000 00:26:56.238 NVMe0n1 : 15.05 11304.70 44.16 279.94 0.00 10997.76 421.30 43940.33 00:26:56.238 [2024-12-09T09:37:33.962Z] =================================================================================================================== 00:26:56.238 [2024-12-09T09:37:33.962Z] Total : 11304.70 44.16 279.94 0.00 10997.76 421.30 43940.33 00:26:56.238 Received shutdown signal, test time was about 15.000000 seconds 00:26:56.238 00:26:56.238 Latency(us) 00:26:56.238 [2024-12-09T09:37:33.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.238 [2024-12-09T09:37:33.962Z] =================================================================================================================== 00:26:56.238 [2024-12-09T09:37:33.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2760562 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2760562 /var/tmp/bdevperf.sock 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2760562 ']' 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:56.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:56.238 [2024-12-09 10:37:33.745333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:56.238 10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:56.495 [2024-12-09 10:37:33.933896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:56.495 
10:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:56.751 NVMe0n1 00:26:56.751 10:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:57.314 00:26:57.314 10:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:57.569 00:26:57.569 10:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:57.569 10:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:57.825 10:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:58.080 10:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:01.371 10:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:01.371 10:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:01.371 10:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2761480 00:27:01.371 10:37:38 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:01.371 10:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2761480 00:27:02.301 { 00:27:02.301 "results": [ 00:27:02.301 { 00:27:02.301 "job": "NVMe0n1", 00:27:02.301 "core_mask": "0x1", 00:27:02.301 "workload": "verify", 00:27:02.301 "status": "finished", 00:27:02.301 "verify_range": { 00:27:02.301 "start": 0, 00:27:02.301 "length": 16384 00:27:02.301 }, 00:27:02.301 "queue_depth": 128, 00:27:02.301 "io_size": 4096, 00:27:02.301 "runtime": 1.006224, 00:27:02.301 "iops": 11277.806929669736, 00:27:02.301 "mibps": 44.053933319022406, 00:27:02.301 "io_failed": 0, 00:27:02.301 "io_timeout": 0, 00:27:02.301 "avg_latency_us": 11306.27074475049, 00:27:02.301 "min_latency_us": 998.6438095238095, 00:27:02.301 "max_latency_us": 11484.40380952381 00:27:02.301 } 00:27:02.301 ], 00:27:02.301 "core_count": 1 00:27:02.301 } 00:27:02.301 10:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:02.301 [2024-12-09 10:37:33.380945] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:27:02.301 [2024-12-09 10:37:33.380999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760562 ] 00:27:02.301 [2024-12-09 10:37:33.454572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.301 [2024-12-09 10:37:33.492339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.301 [2024-12-09 10:37:35.575089] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:02.302 [2024-12-09 10:37:35.575135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.302 [2024-12-09 10:37:35.575147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.302 [2024-12-09 10:37:35.575156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.302 [2024-12-09 10:37:35.575164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.302 [2024-12-09 10:37:35.575172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.302 [2024-12-09 10:37:35.575179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.302 [2024-12-09 10:37:35.575186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.302 [2024-12-09 10:37:35.575194] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.302 [2024-12-09 10:37:35.575202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:02.302 [2024-12-09 10:37:35.575228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:02.302 [2024-12-09 10:37:35.575243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141fa0 (9): Bad file descriptor 00:27:02.302 [2024-12-09 10:37:35.619758] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:02.302 Running I/O for 1 seconds... 00:27:02.302 11220.00 IOPS, 43.83 MiB/s 00:27:02.302 Latency(us) 00:27:02.302 [2024-12-09T09:37:40.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.302 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:02.302 Verification LBA range: start 0x0 length 0x4000 00:27:02.302 NVMe0n1 : 1.01 11277.81 44.05 0.00 0.00 11306.27 998.64 11484.40 00:27:02.302 [2024-12-09T09:37:40.026Z] =================================================================================================================== 00:27:02.302 [2024-12-09T09:37:40.026Z] Total : 11277.81 44.05 0.00 0.00 11306.27 998.64 11484.40 00:27:02.302 10:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:02.302 10:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:02.570 10:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:02.827 10:37:40 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:02.827 10:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:03.085 10:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:03.085 10:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:06.357 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:06.358 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:06.358 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2760562 00:27:06.358 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2760562 ']' 00:27:06.358 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2760562 00:27:06.358 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:06.358 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.358 10:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760562 00:27:06.358 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:06.358 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:06.358 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760562' 00:27:06.358 killing 
process with pid 2760562 00:27:06.358 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2760562 00:27:06.358 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2760562 00:27:06.614 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:06.614 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:06.872 rmmod nvme_tcp 00:27:06.872 rmmod nvme_fabrics 00:27:06.872 rmmod nvme_keyring 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2757552 ']' 00:27:06.872 10:37:44 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2757552 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2757552 ']' 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2757552 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757552 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757552' 00:27:06.872 killing process with pid 2757552 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2757552 00:27:06.872 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2757552 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.130 10:37:44 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.130 10:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.029 10:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:09.029 00:27:09.029 real 0m37.703s 00:27:09.029 user 1m59.532s 00:27:09.029 sys 0m7.931s 00:27:09.029 10:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.029 10:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:09.029 ************************************ 00:27:09.029 END TEST nvmf_failover 00:27:09.029 ************************************ 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.289 ************************************ 00:27:09.289 START TEST nvmf_host_discovery 00:27:09.289 ************************************ 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:09.289 * Looking for test storage... 
00:27:09.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:09.289 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.289 --rc genhtml_branch_coverage=1 00:27:09.289 --rc genhtml_function_coverage=1 00:27:09.289 --rc 
genhtml_legend=1 00:27:09.289 --rc geninfo_all_blocks=1 00:27:09.289 --rc geninfo_unexecuted_blocks=1 00:27:09.289 00:27:09.289 ' 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.289 --rc genhtml_branch_coverage=1 00:27:09.289 --rc genhtml_function_coverage=1 00:27:09.289 --rc genhtml_legend=1 00:27:09.289 --rc geninfo_all_blocks=1 00:27:09.289 --rc geninfo_unexecuted_blocks=1 00:27:09.289 00:27:09.289 ' 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.289 --rc genhtml_branch_coverage=1 00:27:09.289 --rc genhtml_function_coverage=1 00:27:09.289 --rc genhtml_legend=1 00:27:09.289 --rc geninfo_all_blocks=1 00:27:09.289 --rc geninfo_unexecuted_blocks=1 00:27:09.289 00:27:09.289 ' 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.289 --rc genhtml_branch_coverage=1 00:27:09.289 --rc genhtml_function_coverage=1 00:27:09.289 --rc genhtml_legend=1 00:27:09.289 --rc geninfo_all_blocks=1 00:27:09.289 --rc geninfo_unexecuted_blocks=1 00:27:09.289 00:27:09.289 ' 00:27:09.289 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.549 10:37:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.549 10:37:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.549 10:37:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:09.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:09.549 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.118 
10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.118 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.119 10:37:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:16.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:16.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:16.119 Found net devices under 0000:86:00.0: cvl_0_0 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:16.119 Found net devices under 0000:86:00.1: cvl_0_1 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:27:16.119 00:27:16.119 --- 10.0.0.2 ping statistics --- 00:27:16.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.119 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:27:16.119 00:27:16.119 --- 10.0.0.1 ping statistics --- 00:27:16.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.119 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.119 
10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2765824 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2765824 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2765824 ']' 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.119 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:16.120 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.120 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 [2024-12-09 10:37:53.018382] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:27:16.120 [2024-12-09 10:37:53.018431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.120 [2024-12-09 10:37:53.097132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.120 [2024-12-09 10:37:53.135795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.120 [2024-12-09 10:37:53.135837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.120 [2024-12-09 10:37:53.135844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.120 [2024-12-09 10:37:53.135849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.120 [2024-12-09 10:37:53.135854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:16.120 [2024-12-09 10:37:53.136447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 [2024-12-09 10:37:53.279618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 [2024-12-09 10:37:53.291803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:16.120 10:37:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 null0 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 null1 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2765949 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2765949 /tmp/host.sock 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2765949 ']' 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:16.120 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 [2024-12-09 10:37:53.368725] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:27:16.120 [2024-12-09 10:37:53.368764] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765949 ] 00:27:16.120 [2024-12-09 10:37:53.445257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.120 [2024-12-09 10:37:53.487221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:16.120 
10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:16.120 10:37:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.120 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.121 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:16.121 10:37:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.121 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.121 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.121 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.121 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.121 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.121 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.377 [2024-12-09 10:37:53.905401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.377 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:16.377 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:27:16.378 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.378 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:16.378 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.378 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:16.378 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:16.378 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.633 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:16.634 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:16.890 [2024-12-09 10:37:54.607329] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.890 [2024-12-09 10:37:54.607349] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.890 [2024-12-09 10:37:54.607360] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:17.146 [2024-12-09 10:37:54.695625] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:17.421 [2024-12-09 10:37:54.880666] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:17.421 [2024-12-09 10:37:54.881470] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf74920:1 started. 
00:27:17.421 [2024-12-09 10:37:54.882868] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:17.421 [2024-12-09 10:37:54.882883] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:17.421 [2024-12-09 10:37:54.886520] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf74920 was disconnected and freed. delete nvme_qpair. 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.421 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:17.726 10:37:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 [2024-12-09 10:37:55.282912] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf74ca0:1 started. 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 [2024-12-09 10:37:55.328294] 
bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf74ca0 was disconnected and freed. delete nvme_qpair. 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 [2024-12-09 10:37:55.385289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:17.726 [2024-12-09 10:37:55.386218] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:17.726 [2024-12-09 10:37:55.386237] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:17.726 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:18.001 10:37:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:18.001 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.002 [2024-12-09 10:37:55.472492] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:18.002 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:18.002 [2024-12-09 10:37:55.653435] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:18.002 [2024-12-09 10:37:55.653469] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:18.002 [2024-12-09 10:37:55.653477] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:27:18.002 [2024-12-09 10:37:55.653482] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.942 [2024-12-09 10:37:56.621046] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:18.942 [2024-12-09 10:37:56.621068] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:18.942 [2024-12-09 10:37:56.625057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.942 [2024-12-09 10:37:56.625074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.942 [2024-12-09 10:37:56.625083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.942 [2024-12-09 10:37:56.625090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.942 [2024-12-09 10:37:56.625098] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.942 [2024-12-09 10:37:56.625104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.942 [2024-12-09 10:37:56.625111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.942 [2024-12-09 10:37:56.625118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.942 [2024-12-09 10:37:56.625127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:18.942 
10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.942 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:18.942 [2024-12-09 10:37:56.635063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:18.942 [2024-12-09 10:37:56.645098] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:18.942 [2024-12-09 10:37:56.645110] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:18.942 [2024-12-09 10:37:56.645116] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:18.942 [2024-12-09 10:37:56.645120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:18.942 [2024-12-09 10:37:56.645136] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:18.942 [2024-12-09 10:37:56.645335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.942 [2024-12-09 10:37:56.645349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf46930 with addr=10.0.0.2, port=4420 00:27:18.942 [2024-12-09 10:37:56.645356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:18.942 [2024-12-09 10:37:56.645375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:18.943 [2024-12-09 10:37:56.645386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:18.943 [2024-12-09 10:37:56.645393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:18.943 [2024-12-09 10:37:56.645400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:18.943 [2024-12-09 10:37:56.645406] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:18.943 [2024-12-09 10:37:56.645411] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:18.943 [2024-12-09 10:37:56.645415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:18.943 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.943 [2024-12-09 10:37:56.655166] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:18.943 [2024-12-09 10:37:56.655177] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:18.943 [2024-12-09 10:37:56.655180] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:18.943 [2024-12-09 10:37:56.655188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:18.943 [2024-12-09 10:37:56.655201] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:18.943 [2024-12-09 10:37:56.655309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.943 [2024-12-09 10:37:56.655321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf46930 with addr=10.0.0.2, port=4420 00:27:18.943 [2024-12-09 10:37:56.655328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:18.943 [2024-12-09 10:37:56.655338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:18.943 [2024-12-09 10:37:56.655347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:18.943 [2024-12-09 10:37:56.655353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:18.943 [2024-12-09 10:37:56.655359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:18.943 [2024-12-09 10:37:56.655364] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:18.943 [2024-12-09 10:37:56.655368] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:18.943 [2024-12-09 10:37:56.655372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:19.201 [2024-12-09 10:37:56.665251] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:19.201 [2024-12-09 10:37:56.665270] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:19.201 [2024-12-09 10:37:56.665275] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:19.201 [2024-12-09 10:37:56.665279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:19.201 [2024-12-09 10:37:56.665298] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:19.201 [2024-12-09 10:37:56.665456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.201 [2024-12-09 10:37:56.665470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf46930 with addr=10.0.0.2, port=4420 00:27:19.201 [2024-12-09 10:37:56.665478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:19.201 [2024-12-09 10:37:56.665490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:19.201 [2024-12-09 10:37:56.665500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:19.201 [2024-12-09 10:37:56.665507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:19.201 [2024-12-09 10:37:56.665514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:19.201 [2024-12-09 10:37:56.665520] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:19.201 [2024-12-09 10:37:56.665524] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:19.201 [2024-12-09 10:37:56.665529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:19.201 [2024-12-09 10:37:56.675329] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:19.201 [2024-12-09 10:37:56.675347] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:19.201 [2024-12-09 10:37:56.675353] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:27:19.201 [2024-12-09 10:37:56.675357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:19.201 [2024-12-09 10:37:56.675372] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:19.201 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:19.201 [2024-12-09 10:37:56.675577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.202 [2024-12-09 10:37:56.675591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf46930 with addr=10.0.0.2, port=4420 00:27:19.202 [2024-12-09 10:37:56.675598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:19.202 [2024-12-09 10:37:56.675609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:19.202 [2024-12-09 10:37:56.675619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:19.202 [2024-12-09 10:37:56.675626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:19.202 [2024-12-09 10:37:56.675632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:19.202 [2024-12-09 10:37:56.675638] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:19.202 [2024-12-09 10:37:56.675642] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:19.202 [2024-12-09 10:37:56.675646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:19.202 [2024-12-09 10:37:56.685403] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:19.202 [2024-12-09 10:37:56.685416] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:19.202 [2024-12-09 10:37:56.685421] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:19.202 [2024-12-09 10:37:56.685425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:19.202 [2024-12-09 10:37:56.685438] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:19.202 [2024-12-09 10:37:56.685714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.202 [2024-12-09 10:37:56.685726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf46930 with addr=10.0.0.2, port=4420 00:27:19.202 [2024-12-09 10:37:56.685734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:19.202 [2024-12-09 10:37:56.685744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:19.202 [2024-12-09 10:37:56.685754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:19.202 [2024-12-09 10:37:56.685760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:19.202 [2024-12-09 10:37:56.685767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:19.202 [2024-12-09 10:37:56.685772] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:19.202 [2024-12-09 10:37:56.685776] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:19.202 [2024-12-09 10:37:56.685780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:19.202 [2024-12-09 10:37:56.695469] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:19.202 [2024-12-09 10:37:56.695479] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:19.202 [2024-12-09 10:37:56.695484] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:19.202 [2024-12-09 10:37:56.695487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:19.202 [2024-12-09 10:37:56.695500] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:19.202 [2024-12-09 10:37:56.695675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.202 [2024-12-09 10:37:56.695693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf46930 with addr=10.0.0.2, port=4420 00:27:19.202 [2024-12-09 10:37:56.695700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:19.202 [2024-12-09 10:37:56.695710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:19.202 [2024-12-09 10:37:56.695720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:19.202 [2024-12-09 10:37:56.695726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:19.202 [2024-12-09 10:37:56.695732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:19.202 [2024-12-09 10:37:56.695737] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:19.202 [2024-12-09 10:37:56.695742] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:19.202 [2024-12-09 10:37:56.695746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:19.202 [2024-12-09 10:37:56.705530] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:19.202 [2024-12-09 10:37:56.705540] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:19.202 [2024-12-09 10:37:56.705543] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:19.202 [2024-12-09 10:37:56.705547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:19.202 [2024-12-09 10:37:56.705563] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:19.202 [2024-12-09 10:37:56.705800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.202 [2024-12-09 10:37:56.705817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf46930 with addr=10.0.0.2, port=4420 00:27:19.202 [2024-12-09 10:37:56.705824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46930 is same with the state(6) to be set 00:27:19.202 [2024-12-09 10:37:56.705834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf46930 (9): Bad file descriptor 00:27:19.202 [2024-12-09 10:37:56.705843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:19.202 [2024-12-09 10:37:56.705849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:19.202 [2024-12-09 10:37:56.705855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:19.202 [2024-12-09 10:37:56.705861] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:19.202 [2024-12-09 10:37:56.705865] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:19.202 [2024-12-09 10:37:56.705869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:19.202 [2024-12-09 10:37:56.707356] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:19.202 [2024-12-09 10:37:56.707371] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:19.202 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.203 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.460 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:19.460 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.461 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.390 [2024-12-09 10:37:58.026996] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:20.390 [2024-12-09 10:37:58.027018] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:27:20.390 [2024-12-09 10:37:58.027030] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:20.647 [2024-12-09 10:37:58.115286] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:20.904 [2024-12-09 10:37:58.427660] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:20.904 [2024-12-09 10:37:58.428267] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xf7a970:1 started. 00:27:20.904 [2024-12-09 10:37:58.429901] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:20.904 [2024-12-09 10:37:58.429932] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:20.904 [2024-12-09 10:37:58.435973] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xf7a970 was disconnected and freed. delete nvme_qpair. 
00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.904 request: 00:27:20.904 { 00:27:20.904 "name": "nvme", 00:27:20.904 "trtype": "tcp", 00:27:20.904 "traddr": "10.0.0.2", 00:27:20.904 "adrfam": "ipv4", 00:27:20.904 "trsvcid": "8009", 00:27:20.904 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:20.904 "wait_for_attach": true, 00:27:20.904 "method": "bdev_nvme_start_discovery", 00:27:20.904 "req_id": 1 00:27:20.904 } 00:27:20.904 Got JSON-RPC error response 00:27:20.904 response: 00:27:20.904 { 00:27:20.904 "code": -17, 00:27:20.904 "message": "File exists" 00:27:20.904 } 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.904 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.904 request: 00:27:20.904 { 00:27:20.904 "name": "nvme_second", 00:27:20.904 "trtype": "tcp", 00:27:20.904 "traddr": "10.0.0.2", 00:27:20.904 "adrfam": "ipv4", 00:27:20.905 "trsvcid": "8009", 00:27:20.905 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:20.905 "wait_for_attach": true, 00:27:20.905 "method": "bdev_nvme_start_discovery", 00:27:20.905 "req_id": 1 00:27:20.905 } 00:27:20.905 Got JSON-RPC error response 00:27:20.905 response: 00:27:20.905 { 00:27:20.905 "code": -17, 00:27:20.905 "message": "File exists" 00:27:20.905 } 
00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.905 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:21.162 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.090 [2024-12-09 10:37:59.678694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.090 [2024-12-09 10:37:59.678722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf5ccb0 with addr=10.0.0.2, port=8010 00:27:22.090 [2024-12-09 10:37:59.678737] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:22.090 [2024-12-09 10:37:59.678744] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:22.090 [2024-12-09 10:37:59.678750] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:23.020 [2024-12-09 10:38:00.681121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.020 [2024-12-09 10:38:00.681154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf5ccb0 with addr=10.0.0.2, port=8010 00:27:23.020 [2024-12-09 10:38:00.681172] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:23.020 [2024-12-09 10:38:00.681179] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:23.020 [2024-12-09 10:38:00.681186] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:24.389 [2024-12-09 10:38:01.683289] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:24.389 request: 00:27:24.389 { 00:27:24.389 "name": "nvme_second", 00:27:24.389 "trtype": "tcp", 00:27:24.389 "traddr": "10.0.0.2", 00:27:24.389 "adrfam": "ipv4", 00:27:24.389 "trsvcid": "8010", 00:27:24.389 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:24.389 "wait_for_attach": false, 00:27:24.389 "attach_timeout_ms": 3000, 00:27:24.389 "method": "bdev_nvme_start_discovery", 00:27:24.389 "req_id": 1 
00:27:24.389 } 00:27:24.389 Got JSON-RPC error response 00:27:24.389 response: 00:27:24.389 { 00:27:24.389 "code": -110, 00:27:24.389 "message": "Connection timed out" 00:27:24.389 } 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:24.389 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2765949 00:27:24.390 10:38:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.390 rmmod nvme_tcp 00:27:24.390 rmmod nvme_fabrics 00:27:24.390 rmmod nvme_keyring 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2765824 ']' 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2765824 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2765824 ']' 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2765824 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2765824 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2765824' 00:27:24.390 killing process with pid 2765824 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2765824 00:27:24.390 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2765824 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.390 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:27:26.922 00:27:26.922 real 0m17.262s 00:27:26.922 user 0m20.727s 00:27:26.922 sys 0m5.688s 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.922 ************************************ 00:27:26.922 END TEST nvmf_host_discovery 00:27:26.922 ************************************ 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.922 ************************************ 00:27:26.922 START TEST nvmf_host_multipath_status 00:27:26.922 ************************************ 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:26.922 * Looking for test storage... 
00:27:26.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:26.922 10:38:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:26.922 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.923 10:38:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:26.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.923 --rc genhtml_branch_coverage=1 00:27:26.923 --rc genhtml_function_coverage=1 00:27:26.923 --rc genhtml_legend=1 00:27:26.923 --rc geninfo_all_blocks=1 00:27:26.923 --rc geninfo_unexecuted_blocks=1 00:27:26.923 00:27:26.923 ' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:26.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.923 --rc genhtml_branch_coverage=1 00:27:26.923 --rc genhtml_function_coverage=1 00:27:26.923 --rc genhtml_legend=1 00:27:26.923 --rc geninfo_all_blocks=1 00:27:26.923 --rc geninfo_unexecuted_blocks=1 00:27:26.923 00:27:26.923 ' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:26.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.923 --rc genhtml_branch_coverage=1 00:27:26.923 --rc genhtml_function_coverage=1 00:27:26.923 --rc genhtml_legend=1 00:27:26.923 --rc geninfo_all_blocks=1 00:27:26.923 --rc geninfo_unexecuted_blocks=1 00:27:26.923 00:27:26.923 ' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:26.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.923 --rc genhtml_branch_coverage=1 00:27:26.923 --rc genhtml_function_coverage=1 00:27:26.923 --rc genhtml_legend=1 00:27:26.923 --rc geninfo_all_blocks=1 00:27:26.923 --rc geninfo_unexecuted_blocks=1 00:27:26.923 00:27:26.923 ' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:26.923 
10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:26.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:26.923 10:38:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:26.923 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:33.506 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:33.507 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:33.507 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:33.507 Found net devices under 0000:86:00.0: cvl_0_0 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.507 10:38:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:33.507 Found net devices under 0000:86:00.1: cvl_0_1 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.507 10:38:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.507 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.508 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:27:33.508 00:27:33.508 --- 10.0.0.2 ping statistics --- 00:27:33.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.508 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:27:33.508 00:27:33.508 --- 10.0.0.1 ping statistics --- 00:27:33.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.508 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2770947 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2770947 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2770947 ']' 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:33.508 [2024-12-09 10:38:10.354512] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:27:33.508 [2024-12-09 10:38:10.354562] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.508 [2024-12-09 10:38:10.434639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:33.508 [2024-12-09 10:38:10.476251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.508 [2024-12-09 10:38:10.476287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:33.508 [2024-12-09 10:38:10.476295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.508 [2024-12-09 10:38:10.476301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.508 [2024-12-09 10:38:10.476307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.508 [2024-12-09 10:38:10.477456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.508 [2024-12-09 10:38:10.477458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2770947 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:33.508 [2024-12-09 10:38:10.779082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.508 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:27:33.508 Malloc0 00:27:33.508 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:33.508 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:33.766 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.022 [2024-12-09 10:38:11.576407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.022 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:34.279 [2024-12-09 10:38:11.788969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2771243 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2771243 /var/tmp/bdevperf.sock 00:27:34.279 10:38:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2771243 ']' 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.279 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:34.537 10:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.537 10:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:34.537 10:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:34.794 10:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:35.051 Nvme0n1 00:27:35.308 10:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:35.565 Nvme0n1 00:27:35.565 10:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:35.565 10:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:38.092 10:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:38.092 10:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:38.092 10:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:38.092 10:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:39.026 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:39.026 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:39.026 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.026 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:39.284 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.284 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:39.284 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.284 10:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:39.542 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:39.542 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:39.542 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.542 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.801 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:40.058 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.058 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:40.058 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.058 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:40.317 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.317 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:40.317 10:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:40.575 10:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:40.832 10:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:41.765 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:41.765 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:41.765 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.765 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:42.023 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:42.023 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:42.023 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.023 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.297 10:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:42.553 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.553 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:42.553 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.553 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:42.810 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.810 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:42.810 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.810 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:43.068 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.068 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:43.068 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:43.068 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:43.325 10:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:44.696 10:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:44.696 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:44.696 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.696 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:44.696 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.696 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:44.696 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:44.696 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.953 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:45.210 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.210 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:45.210 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.210 10:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:45.467 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.467 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:45.467 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.467 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:45.723 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.724 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:45.724 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:45.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:45.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.374 10:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.631 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:47.888 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.888 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:47.889 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.889 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:48.146 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.146 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:48.146 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.146 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:48.403 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.403 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:48.403 10:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:48.660 10:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:48.660 10:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:50.031 10:38:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.031 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.288 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.288 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:50.288 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.288 10:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:50.545 
10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.545 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:50.545 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.545 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:50.803 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.803 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:50.803 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.803 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:50.803 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.803 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:50.803 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:51.060 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:51.316 10:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:52.246 10:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:52.246 10:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:52.246 10:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.246 10:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:52.503 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:52.503 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:52.503 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.503 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:52.759 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.759 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:52.759 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.759 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.016 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.272 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:53.272 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:53.272 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.272 10:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:53.529 10:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.529 10:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:53.786 10:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:53.786 10:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:54.042 10:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:54.042 10:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.411 10:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:55.669 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.669 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:55.669 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.669 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.926 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.183 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.183 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:56.183 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:56.183 10:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.441 10:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.441 10:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:56.441 10:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:56.698 10:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:56.956 10:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:57.887 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:57.887 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:57.887 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.887 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:58.144 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.144 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:58.144 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.144 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:58.401 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.401 10:38:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:58.401 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.401 10:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:58.401 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.401 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:58.401 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.401 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:58.657 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.657 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:58.657 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.657 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:58.914 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.914 
10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:58.914 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.914 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:59.171 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.171 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:59.171 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:59.171 10:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:59.428 10:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:00.799 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:00.799 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:00.799 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.799 10:38:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:00.799 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.799 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:00.799 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.799 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:01.056 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.057 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:01.057 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.057 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.057 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.057 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.057 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.057 10:38:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:01.313 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.313 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:01.313 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.313 10:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:01.571 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.571 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:01.571 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.571 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:01.861 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.861 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:01.861 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:02.119 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:02.119 10:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:03.494 10:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:03.494 10:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:03.494 10:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.494 10:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:03.494 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.494 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:03.494 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.494 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:03.751 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.751 10:38:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:03.751 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.751 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:03.751 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.751 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:03.752 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.752 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:04.036 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.036 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:04.036 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.036 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:04.321 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.321 
10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:04.321 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.321 10:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2771243 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2771243 ']' 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2771243 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2771243 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2771243' 00:28:04.629 killing process with pid 2771243 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2771243 00:28:04.629 
10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2771243 00:28:04.629 { 00:28:04.629 "results": [ 00:28:04.629 { 00:28:04.629 "job": "Nvme0n1", 00:28:04.629 "core_mask": "0x4", 00:28:04.629 "workload": "verify", 00:28:04.629 "status": "terminated", 00:28:04.629 "verify_range": { 00:28:04.629 "start": 0, 00:28:04.629 "length": 16384 00:28:04.629 }, 00:28:04.629 "queue_depth": 128, 00:28:04.629 "io_size": 4096, 00:28:04.629 "runtime": 28.734988, 00:28:04.629 "iops": 10668.840369795875, 00:28:04.629 "mibps": 41.675157694515136, 00:28:04.629 "io_failed": 0, 00:28:04.629 "io_timeout": 0, 00:28:04.629 "avg_latency_us": 11977.016593825145, 00:28:04.629 "min_latency_us": 565.6380952380953, 00:28:04.629 "max_latency_us": 3019898.88 00:28:04.629 } 00:28:04.629 ], 00:28:04.629 "core_count": 1 00:28:04.629 } 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2771243 00:28:04.629 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:04.629 [2024-12-09 10:38:11.854233] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:28:04.629 [2024-12-09 10:38:11.854291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771243 ] 00:28:04.629 [2024-12-09 10:38:11.931165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.629 [2024-12-09 10:38:11.971483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.629 Running I/O for 90 seconds... 
00:28:04.629 11511.00 IOPS, 44.96 MiB/s [2024-12-09T09:38:42.353Z] 11494.00 IOPS, 44.90 MiB/s [2024-12-09T09:38:42.353Z] 11504.67 IOPS, 44.94 MiB/s [2024-12-09T09:38:42.353Z] 11491.00 IOPS, 44.89 MiB/s [2024-12-09T09:38:42.353Z] 11490.60 IOPS, 44.89 MiB/s [2024-12-09T09:38:42.353Z] 11460.67 IOPS, 44.77 MiB/s [2024-12-09T09:38:42.354Z] 11451.00 IOPS, 44.73 MiB/s [2024-12-09T09:38:42.354Z] 11449.38 IOPS, 44.72 MiB/s [2024-12-09T09:38:42.354Z] 11455.22 IOPS, 44.75 MiB/s [2024-12-09T09:38:42.354Z] 11458.80 IOPS, 44.76 MiB/s [2024-12-09T09:38:42.354Z] 11459.18 IOPS, 44.76 MiB/s [2024-12-09T09:38:42.354Z] 11462.83 IOPS, 44.78 MiB/s [2024-12-09T09:38:42.354Z] [2024-12-09 10:38:26.121033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.630 [2024-12-09 10:38:26.121072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121176] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121543] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:04.630 [2024-12-09 10:38:26.121648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.630 [2024-12-09 10:38:26.121655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
00:28:04.630-633 [2024-12-09 10:38:26.121667-.124385] nvme_qpair.c repeated *NOTICE* pairs (condensed; successive entries differ only in cid, lba, and sqhd):
  243:nvme_io_qpair_print_command: WRITE sqid:1 nsid:1 lba:121392..122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 (plus READ lba:121272 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0)
  474:spdk_nvme_print_completion: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0027..007f then 0000..000e p:0 m:0 dnr:0
IOPS samples [2024-12-09T09:38:42.357Z]: 11248.46 IOPS, 43.94 MiB/s; 10445.00, 40.80; 9748.67, 38.08; 9321.00, 36.41; 9446.12, 36.90; 9553.78, 37.32; 9760.00, 38.12; 9957.90, 38.90; 10126.57, 39.56; 10182.14, 39.77; 10235.57, 39.98; 10308.92, 40.27; 10443.44, 40.79; 10564.12, 41.27
00:28:04.633 [2024-12-09 10:38:39.767233-.767806] second burst, same pattern: WRITE sqid:1 nsid:1 lba:7544..7688 len:8 (plus READ lba:7496 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 sqhd:0054..005f p:0 m:0 dnr:0; log truncated mid-entry at [2024-12-09 10:38:39.767806]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.767984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.767990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.633 [2024-12-09 10:38:39.768106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:04.633 [2024-12-09 10:38:39.768494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.633 [2024-12-09 10:38:39.768501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:04.634 [2024-12-09 10:38:39.768513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.634 [2024-12-09 10:38:39.768520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:04.634 [2024-12-09 10:38:39.768532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.634 [2024-12-09 10:38:39.768539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:04.634 [2024-12-09 10:38:39.768551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.634 [2024-12-09 10:38:39.768558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:04.634 [2024-12-09 10:38:39.768572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.634 [2024-12-09 10:38:39.768579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:04.634 10618.22 IOPS, 41.48 MiB/s [2024-12-09T09:38:42.358Z] 10645.89 
IOPS, 41.59 MiB/s [2024-12-09T09:38:42.358Z] Received shutdown signal, test time was about 28.735646 seconds 00:28:04.634 00:28:04.634 Latency(us) 00:28:04.634 [2024-12-09T09:38:42.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.634 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:04.634 Verification LBA range: start 0x0 length 0x4000 00:28:04.634 Nvme0n1 : 28.73 10668.84 41.68 0.00 0.00 11977.02 565.64 3019898.88 00:28:04.634 [2024-12-09T09:38:42.358Z] =================================================================================================================== 00:28:04.634 [2024-12-09T09:38:42.358Z] Total : 10668.84 41.68 0.00 0.00 11977.02 565.64 3019898.88 00:28:04.634 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.892 rmmod nvme_tcp 00:28:04.892 rmmod nvme_fabrics 00:28:04.892 rmmod nvme_keyring 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2770947 ']' 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2770947 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2770947 ']' 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2770947 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.892 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2770947 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2770947' 00:28:05.151 killing process with pid 2770947 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2770947 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 
-- # wait 2770947 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.151 10:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.688 00:28:07.688 real 0m40.704s 00:28:07.688 user 1m50.384s 00:28:07.688 sys 0m11.469s 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:07.688 ************************************ 00:28:07.688 END TEST nvmf_host_multipath_status 
00:28:07.688 ************************************ 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.688 ************************************ 00:28:07.688 START TEST nvmf_discovery_remove_ifc 00:28:07.688 ************************************ 00:28:07.688 10:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:07.688 * Looking for test storage... 00:28:07.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.688 10:38:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.688 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.689 --rc genhtml_branch_coverage=1 00:28:07.689 --rc genhtml_function_coverage=1 00:28:07.689 --rc genhtml_legend=1 00:28:07.689 --rc geninfo_all_blocks=1 
00:28:07.689 --rc geninfo_unexecuted_blocks=1 00:28:07.689 00:28:07.689 ' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.689 --rc genhtml_branch_coverage=1 00:28:07.689 --rc genhtml_function_coverage=1 00:28:07.689 --rc genhtml_legend=1 00:28:07.689 --rc geninfo_all_blocks=1 00:28:07.689 --rc geninfo_unexecuted_blocks=1 00:28:07.689 00:28:07.689 ' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.689 --rc genhtml_branch_coverage=1 00:28:07.689 --rc genhtml_function_coverage=1 00:28:07.689 --rc genhtml_legend=1 00:28:07.689 --rc geninfo_all_blocks=1 00:28:07.689 --rc geninfo_unexecuted_blocks=1 00:28:07.689 00:28:07.689 ' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.689 --rc genhtml_branch_coverage=1 00:28:07.689 --rc genhtml_function_coverage=1 00:28:07.689 --rc genhtml_legend=1 00:28:07.689 --rc geninfo_all_blocks=1 00:28:07.689 --rc geninfo_unexecuted_blocks=1 00:28:07.689 00:28:07.689 ' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.689 
10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.689 
10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.689 10:38:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.689 10:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.252 10:38:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.252 10:38:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:14.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.252 10:38:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:14.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:14.252 Found net devices under 0000:86:00.0: cvl_0_0 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:14.252 Found net devices under 0000:86:00.1: cvl_0_1 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.252 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.253 10:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:28:14.253 00:28:14.253 --- 10.0.0.2 ping statistics --- 00:28:14.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.253 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:28:14.253 00:28:14.253 --- 10.0.0.1 ping statistics --- 00:28:14.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.253 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2779833 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2779833 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2779833 ']' 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.253 [2024-12-09 10:38:51.158878] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:28:14.253 [2024-12-09 10:38:51.158922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.253 [2024-12-09 10:38:51.221289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.253 [2024-12-09 10:38:51.262211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.253 [2024-12-09 10:38:51.262247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:14.253 [2024-12-09 10:38:51.262255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.253 [2024-12-09 10:38:51.262261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.253 [2024-12-09 10:38:51.262267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.253 [2024-12-09 10:38:51.262862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.253 [2024-12-09 10:38:51.415116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.253 [2024-12-09 10:38:51.423309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:14.253 null0 00:28:14.253 [2024-12-09 10:38:51.455283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2779852 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2779852 /tmp/host.sock 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2779852 ']' 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:14.253 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.253 [2024-12-09 10:38:51.524118] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:28:14.253 [2024-12-09 10:38:51.524158] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779852 ] 00:28:14.253 [2024-12-09 10:38:51.582699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.253 [2024-12-09 10:38:51.625242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.253 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.253 10:38:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:14.254 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.254 10:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.203 [2024-12-09 10:38:52.781247] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:15.203 [2024-12-09 10:38:52.781267] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:15.203 [2024-12-09 10:38:52.781278] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:15.203 [2024-12-09 10:38:52.867541] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:15.460 [2024-12-09 10:38:53.042537] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:15.460 [2024-12-09 10:38:53.043337] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1197940:1 started. 
00:28:15.460 [2024-12-09 10:38:53.044654] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:15.460 [2024-12-09 10:38:53.044708] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:15.460 [2024-12-09 10:38:53.044728] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:15.460 [2024-12-09 10:38:53.044741] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:15.460 [2024-12-09 10:38:53.044760] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.460 [2024-12-09 10:38:53.050644] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1197940 was disconnected and freed. delete nvme_qpair. 
00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:15.460 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:15.718 10:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:16.649 10:38:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.581 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.581 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:28:17.581 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.581 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.581 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.581 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:17.581 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.838 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.839 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:17.839 10:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:18.773 10:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:19.710 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.969 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:19.969 10:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:20.905 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.906 [2024-12-09 10:38:58.486172] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:20.906 [2024-12-09 10:38:58.486207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.906 [2024-12-09 10:38:58.486216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.906 [2024-12-09 10:38:58.486241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.906 [2024-12-09 10:38:58.486248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.906 [2024-12-09 10:38:58.486255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.906 [2024-12-09 10:38:58.486262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.906 [2024-12-09 10:38:58.486269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.906 [2024-12-09 10:38:58.486276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.906 [2024-12-09 10:38:58.486283] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.906 [2024-12-09 10:38:58.486290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.906 [2024-12-09 10:38:58.486296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1174160 is same with the state(6) to be set 00:28:20.906 [2024-12-09 10:38:58.496194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1174160 (9): Bad file descriptor 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:20.906 10:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:20.906 [2024-12-09 10:38:58.506228] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:20.906 [2024-12-09 10:38:58.506240] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:20.906 [2024-12-09 10:38:58.506246] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:20.906 [2024-12-09 10:38:58.506251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:20.906 [2024-12-09 10:38:58.506267] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:21.843 [2024-12-09 10:38:59.528859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:21.843 [2024-12-09 10:38:59.528938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1174160 with addr=10.0.0.2, port=4420 00:28:21.843 [2024-12-09 10:38:59.528972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1174160 is same with the state(6) to be set 00:28:21.843 [2024-12-09 10:38:59.529031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1174160 (9): Bad file descriptor 00:28:21.843 [2024-12-09 10:38:59.529995] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:28:21.843 [2024-12-09 10:38:59.530059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:21.843 [2024-12-09 10:38:59.530082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:21.843 [2024-12-09 10:38:59.530104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:21.843 [2024-12-09 10:38:59.530124] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:21.843 [2024-12-09 10:38:59.530140] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:21.843 [2024-12-09 10:38:59.530154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:21.843 [2024-12-09 10:38:59.530174] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:21.843 [2024-12-09 10:38:59.530189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:21.843 10:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:23.219 [2024-12-09 10:39:00.532708] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:23.219 [2024-12-09 10:39:00.532735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:23.219 [2024-12-09 10:39:00.532749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:23.219 [2024-12-09 10:39:00.532757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:23.219 [2024-12-09 10:39:00.532764] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:23.219 [2024-12-09 10:39:00.532771] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:23.219 [2024-12-09 10:39:00.532776] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:23.219 [2024-12-09 10:39:00.532781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:23.219 [2024-12-09 10:39:00.532806] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:23.219 [2024-12-09 10:39:00.532835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.219 [2024-12-09 10:39:00.532845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.219 [2024-12-09 10:39:00.532860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.219 [2024-12-09 10:39:00.532867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.219 [2024-12-09 10:39:00.532875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.219 [2024-12-09 10:39:00.532882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.219 [2024-12-09 10:39:00.532889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.219 [2024-12-09 10:39:00.532895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.219 [2024-12-09 10:39:00.532903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.219 [2024-12-09 10:39:00.532910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.219 [2024-12-09 10:39:00.532917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:28:23.219 [2024-12-09 10:39:00.533191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1163450 (9): Bad file descriptor 00:28:23.219 [2024-12-09 10:39:00.534200] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:23.219 [2024-12-09 10:39:00.534210] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:23.219 10:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:24.154 10:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:25.088 [2024-12-09 10:39:02.587306] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:25.088 [2024-12-09 10:39:02.587325] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:25.088 [2024-12-09 10:39:02.587337] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:25.088 [2024-12-09 10:39:02.715717] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:25.088 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.346 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:25.346 10:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:25.346 [2024-12-09 10:39:02.939863] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:25.346 [2024-12-09 10:39:02.940491] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1148090:1 started. 
00:28:25.346 [2024-12-09 10:39:02.941510] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:25.346 [2024-12-09 10:39:02.941541] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:25.346 [2024-12-09 10:39:02.941558] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:25.346 [2024-12-09 10:39:02.941571] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:25.346 [2024-12-09 10:39:02.941578] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:25.346 [2024-12-09 10:39:02.945937] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1148090 was disconnected and freed. delete nvme_qpair. 00:28:26.278 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:26.278 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.278 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:26.278 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:26.278 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:26.279 10:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2779852 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2779852 ']' 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2779852 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2779852 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2779852' 00:28:26.279 killing process with pid 2779852 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2779852 00:28:26.279 10:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2779852 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:26.536 
10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:26.536 rmmod nvme_tcp 00:28:26.536 rmmod nvme_fabrics 00:28:26.536 rmmod nvme_keyring 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:26.536 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2779833 ']' 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2779833 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2779833 ']' 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2779833 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2779833 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2779833' 00:28:26.537 
killing process with pid 2779833 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2779833 00:28:26.537 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2779833 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.796 10:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.331 10:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.331 00:28:29.331 real 0m21.506s 00:28:29.331 user 0m26.767s 00:28:29.331 sys 0m5.874s 00:28:29.331 10:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.331 10:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:29.331 ************************************ 00:28:29.331 END TEST nvmf_discovery_remove_ifc 00:28:29.331 ************************************ 00:28:29.331 10:39:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:29.331 10:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:29.331 10:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.331 10:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.331 ************************************ 00:28:29.331 START TEST nvmf_identify_kernel_target 00:28:29.331 ************************************ 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:29.332 * Looking for test storage... 
00:28:29.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:29.332 10:39:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:29.332 10:39:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:29.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.332 --rc genhtml_branch_coverage=1 00:28:29.332 --rc genhtml_function_coverage=1 00:28:29.332 --rc genhtml_legend=1 00:28:29.332 --rc geninfo_all_blocks=1 00:28:29.332 --rc geninfo_unexecuted_blocks=1 00:28:29.332 00:28:29.332 ' 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:29.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.332 --rc genhtml_branch_coverage=1 00:28:29.332 --rc genhtml_function_coverage=1 00:28:29.332 --rc genhtml_legend=1 00:28:29.332 --rc geninfo_all_blocks=1 00:28:29.332 --rc geninfo_unexecuted_blocks=1 00:28:29.332 00:28:29.332 ' 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:29.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.332 --rc genhtml_branch_coverage=1 00:28:29.332 --rc genhtml_function_coverage=1 00:28:29.332 --rc genhtml_legend=1 00:28:29.332 --rc geninfo_all_blocks=1 00:28:29.332 --rc geninfo_unexecuted_blocks=1 00:28:29.332 00:28:29.332 ' 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:29.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:29.332 --rc genhtml_branch_coverage=1 00:28:29.332 --rc genhtml_function_coverage=1 00:28:29.332 --rc genhtml_legend=1 00:28:29.332 --rc geninfo_all_blocks=1 00:28:29.332 --rc geninfo_unexecuted_blocks=1 00:28:29.332 00:28:29.332 ' 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.332 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:29.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:29.333 10:39:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.607 10:39:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:34.607 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.607 10:39:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.607 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:34.607 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.866 10:39:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:34.866 Found net devices under 0000:86:00.0: cvl_0_0 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:34.866 Found net devices under 0000:86:00.1: cvl_0_1 
00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:34.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:28:34.866 00:28:34.866 --- 10.0.0.2 ping statistics --- 00:28:34.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.866 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:28:34.866 00:28:34.866 --- 10.0.0.1 ping statistics --- 00:28:34.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.866 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.866 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:35.125 
10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:35.125 10:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:37.652 Waiting for block devices as requested 00:28:37.909 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:37.909 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:37.909 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:38.167 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:38.167 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:38.167 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:38.425 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:38.425 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:38.425 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:38.425 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:38.683 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:38.683 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:38.683 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:38.941 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:38.941 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:28:38.941 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:39.201 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:39.201 No valid GPT data, bailing 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:39.201 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:39.462 00:28:39.462 Discovery Log Number of Records 2, Generation counter 2 00:28:39.462 =====Discovery Log Entry 0====== 00:28:39.462 trtype: tcp 00:28:39.462 adrfam: ipv4 00:28:39.462 subtype: current discovery subsystem 
00:28:39.462 treq: not specified, sq flow control disable supported 00:28:39.462 portid: 1 00:28:39.462 trsvcid: 4420 00:28:39.462 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:39.462 traddr: 10.0.0.1 00:28:39.462 eflags: none 00:28:39.462 sectype: none 00:28:39.462 =====Discovery Log Entry 1====== 00:28:39.462 trtype: tcp 00:28:39.462 adrfam: ipv4 00:28:39.462 subtype: nvme subsystem 00:28:39.462 treq: not specified, sq flow control disable supported 00:28:39.462 portid: 1 00:28:39.462 trsvcid: 4420 00:28:39.462 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:39.462 traddr: 10.0.0.1 00:28:39.462 eflags: none 00:28:39.462 sectype: none 00:28:39.462 10:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:39.462 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:39.462 ===================================================== 00:28:39.462 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:39.462 ===================================================== 00:28:39.462 Controller Capabilities/Features 00:28:39.462 ================================ 00:28:39.462 Vendor ID: 0000 00:28:39.462 Subsystem Vendor ID: 0000 00:28:39.462 Serial Number: b825d1aebc245b7bd71c 00:28:39.462 Model Number: Linux 00:28:39.462 Firmware Version: 6.8.9-20 00:28:39.462 Recommended Arb Burst: 0 00:28:39.462 IEEE OUI Identifier: 00 00 00 00:28:39.462 Multi-path I/O 00:28:39.462 May have multiple subsystem ports: No 00:28:39.462 May have multiple controllers: No 00:28:39.462 Associated with SR-IOV VF: No 00:28:39.462 Max Data Transfer Size: Unlimited 00:28:39.462 Max Number of Namespaces: 0 00:28:39.462 Max Number of I/O Queues: 1024 00:28:39.462 NVMe Specification Version (VS): 1.3 00:28:39.462 NVMe Specification Version (Identify): 1.3 00:28:39.462 Maximum Queue Entries: 1024 
00:28:39.462 Contiguous Queues Required: No 00:28:39.462 Arbitration Mechanisms Supported 00:28:39.462 Weighted Round Robin: Not Supported 00:28:39.462 Vendor Specific: Not Supported 00:28:39.462 Reset Timeout: 7500 ms 00:28:39.462 Doorbell Stride: 4 bytes 00:28:39.462 NVM Subsystem Reset: Not Supported 00:28:39.462 Command Sets Supported 00:28:39.462 NVM Command Set: Supported 00:28:39.462 Boot Partition: Not Supported 00:28:39.462 Memory Page Size Minimum: 4096 bytes 00:28:39.462 Memory Page Size Maximum: 4096 bytes 00:28:39.462 Persistent Memory Region: Not Supported 00:28:39.462 Optional Asynchronous Events Supported 00:28:39.462 Namespace Attribute Notices: Not Supported 00:28:39.462 Firmware Activation Notices: Not Supported 00:28:39.462 ANA Change Notices: Not Supported 00:28:39.462 PLE Aggregate Log Change Notices: Not Supported 00:28:39.462 LBA Status Info Alert Notices: Not Supported 00:28:39.462 EGE Aggregate Log Change Notices: Not Supported 00:28:39.462 Normal NVM Subsystem Shutdown event: Not Supported 00:28:39.462 Zone Descriptor Change Notices: Not Supported 00:28:39.462 Discovery Log Change Notices: Supported 00:28:39.462 Controller Attributes 00:28:39.462 128-bit Host Identifier: Not Supported 00:28:39.463 Non-Operational Permissive Mode: Not Supported 00:28:39.463 NVM Sets: Not Supported 00:28:39.463 Read Recovery Levels: Not Supported 00:28:39.463 Endurance Groups: Not Supported 00:28:39.463 Predictable Latency Mode: Not Supported 00:28:39.463 Traffic Based Keep ALive: Not Supported 00:28:39.463 Namespace Granularity: Not Supported 00:28:39.463 SQ Associations: Not Supported 00:28:39.463 UUID List: Not Supported 00:28:39.463 Multi-Domain Subsystem: Not Supported 00:28:39.463 Fixed Capacity Management: Not Supported 00:28:39.463 Variable Capacity Management: Not Supported 00:28:39.463 Delete Endurance Group: Not Supported 00:28:39.463 Delete NVM Set: Not Supported 00:28:39.463 Extended LBA Formats Supported: Not Supported 00:28:39.463 Flexible 
Data Placement Supported: Not Supported 00:28:39.463 00:28:39.463 Controller Memory Buffer Support 00:28:39.463 ================================ 00:28:39.463 Supported: No 00:28:39.463 00:28:39.463 Persistent Memory Region Support 00:28:39.463 ================================ 00:28:39.463 Supported: No 00:28:39.463 00:28:39.463 Admin Command Set Attributes 00:28:39.463 ============================ 00:28:39.463 Security Send/Receive: Not Supported 00:28:39.463 Format NVM: Not Supported 00:28:39.463 Firmware Activate/Download: Not Supported 00:28:39.463 Namespace Management: Not Supported 00:28:39.463 Device Self-Test: Not Supported 00:28:39.463 Directives: Not Supported 00:28:39.463 NVMe-MI: Not Supported 00:28:39.463 Virtualization Management: Not Supported 00:28:39.463 Doorbell Buffer Config: Not Supported 00:28:39.463 Get LBA Status Capability: Not Supported 00:28:39.463 Command & Feature Lockdown Capability: Not Supported 00:28:39.463 Abort Command Limit: 1 00:28:39.463 Async Event Request Limit: 1 00:28:39.463 Number of Firmware Slots: N/A 00:28:39.463 Firmware Slot 1 Read-Only: N/A 00:28:39.463 Firmware Activation Without Reset: N/A 00:28:39.463 Multiple Update Detection Support: N/A 00:28:39.463 Firmware Update Granularity: No Information Provided 00:28:39.463 Per-Namespace SMART Log: No 00:28:39.463 Asymmetric Namespace Access Log Page: Not Supported 00:28:39.463 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:39.463 Command Effects Log Page: Not Supported 00:28:39.463 Get Log Page Extended Data: Supported 00:28:39.463 Telemetry Log Pages: Not Supported 00:28:39.463 Persistent Event Log Pages: Not Supported 00:28:39.463 Supported Log Pages Log Page: May Support 00:28:39.463 Commands Supported & Effects Log Page: Not Supported 00:28:39.463 Feature Identifiers & Effects Log Page:May Support 00:28:39.463 NVMe-MI Commands & Effects Log Page: May Support 00:28:39.463 Data Area 4 for Telemetry Log: Not Supported 00:28:39.463 Error Log Page Entries 
Supported: 1 00:28:39.463 Keep Alive: Not Supported 00:28:39.463 00:28:39.463 NVM Command Set Attributes 00:28:39.463 ========================== 00:28:39.463 Submission Queue Entry Size 00:28:39.463 Max: 1 00:28:39.463 Min: 1 00:28:39.463 Completion Queue Entry Size 00:28:39.463 Max: 1 00:28:39.463 Min: 1 00:28:39.463 Number of Namespaces: 0 00:28:39.463 Compare Command: Not Supported 00:28:39.463 Write Uncorrectable Command: Not Supported 00:28:39.463 Dataset Management Command: Not Supported 00:28:39.463 Write Zeroes Command: Not Supported 00:28:39.463 Set Features Save Field: Not Supported 00:28:39.463 Reservations: Not Supported 00:28:39.463 Timestamp: Not Supported 00:28:39.463 Copy: Not Supported 00:28:39.463 Volatile Write Cache: Not Present 00:28:39.463 Atomic Write Unit (Normal): 1 00:28:39.463 Atomic Write Unit (PFail): 1 00:28:39.463 Atomic Compare & Write Unit: 1 00:28:39.463 Fused Compare & Write: Not Supported 00:28:39.463 Scatter-Gather List 00:28:39.463 SGL Command Set: Supported 00:28:39.463 SGL Keyed: Not Supported 00:28:39.463 SGL Bit Bucket Descriptor: Not Supported 00:28:39.463 SGL Metadata Pointer: Not Supported 00:28:39.463 Oversized SGL: Not Supported 00:28:39.463 SGL Metadata Address: Not Supported 00:28:39.463 SGL Offset: Supported 00:28:39.463 Transport SGL Data Block: Not Supported 00:28:39.463 Replay Protected Memory Block: Not Supported 00:28:39.463 00:28:39.463 Firmware Slot Information 00:28:39.463 ========================= 00:28:39.463 Active slot: 0 00:28:39.463 00:28:39.463 00:28:39.463 Error Log 00:28:39.463 ========= 00:28:39.463 00:28:39.463 Active Namespaces 00:28:39.463 ================= 00:28:39.463 Discovery Log Page 00:28:39.463 ================== 00:28:39.463 Generation Counter: 2 00:28:39.463 Number of Records: 2 00:28:39.463 Record Format: 0 00:28:39.463 00:28:39.463 Discovery Log Entry 0 00:28:39.463 ---------------------- 00:28:39.463 Transport Type: 3 (TCP) 00:28:39.463 Address Family: 1 (IPv4) 00:28:39.463 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:28:39.463 Entry Flags: 00:28:39.463 Duplicate Returned Information: 0 00:28:39.463 Explicit Persistent Connection Support for Discovery: 0 00:28:39.463 Transport Requirements: 00:28:39.463 Secure Channel: Not Specified 00:28:39.463 Port ID: 1 (0x0001) 00:28:39.463 Controller ID: 65535 (0xffff) 00:28:39.463 Admin Max SQ Size: 32 00:28:39.463 Transport Service Identifier: 4420 00:28:39.463 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:39.463 Transport Address: 10.0.0.1 00:28:39.463 Discovery Log Entry 1 00:28:39.463 ---------------------- 00:28:39.463 Transport Type: 3 (TCP) 00:28:39.463 Address Family: 1 (IPv4) 00:28:39.463 Subsystem Type: 2 (NVM Subsystem) 00:28:39.463 Entry Flags: 00:28:39.463 Duplicate Returned Information: 0 00:28:39.463 Explicit Persistent Connection Support for Discovery: 0 00:28:39.463 Transport Requirements: 00:28:39.463 Secure Channel: Not Specified 00:28:39.463 Port ID: 1 (0x0001) 00:28:39.463 Controller ID: 65535 (0xffff) 00:28:39.463 Admin Max SQ Size: 32 00:28:39.463 Transport Service Identifier: 4420 00:28:39.464 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:39.464 Transport Address: 10.0.0.1 00:28:39.464 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:39.464 get_feature(0x01) failed 00:28:39.464 get_feature(0x02) failed 00:28:39.464 get_feature(0x04) failed 00:28:39.464 ===================================================== 00:28:39.464 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:39.464 ===================================================== 00:28:39.464 Controller Capabilities/Features 00:28:39.464 ================================ 00:28:39.464 Vendor ID: 0000 00:28:39.464 Subsystem Vendor ID: 
0000 00:28:39.464 Serial Number: 0f1dcc06515305ed68ad 00:28:39.464 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:39.464 Firmware Version: 6.8.9-20 00:28:39.464 Recommended Arb Burst: 6 00:28:39.464 IEEE OUI Identifier: 00 00 00 00:28:39.464 Multi-path I/O 00:28:39.464 May have multiple subsystem ports: Yes 00:28:39.464 May have multiple controllers: Yes 00:28:39.464 Associated with SR-IOV VF: No 00:28:39.464 Max Data Transfer Size: Unlimited 00:28:39.464 Max Number of Namespaces: 1024 00:28:39.464 Max Number of I/O Queues: 128 00:28:39.464 NVMe Specification Version (VS): 1.3 00:28:39.464 NVMe Specification Version (Identify): 1.3 00:28:39.464 Maximum Queue Entries: 1024 00:28:39.464 Contiguous Queues Required: No 00:28:39.464 Arbitration Mechanisms Supported 00:28:39.464 Weighted Round Robin: Not Supported 00:28:39.464 Vendor Specific: Not Supported 00:28:39.464 Reset Timeout: 7500 ms 00:28:39.464 Doorbell Stride: 4 bytes 00:28:39.464 NVM Subsystem Reset: Not Supported 00:28:39.464 Command Sets Supported 00:28:39.464 NVM Command Set: Supported 00:28:39.464 Boot Partition: Not Supported 00:28:39.464 Memory Page Size Minimum: 4096 bytes 00:28:39.464 Memory Page Size Maximum: 4096 bytes 00:28:39.464 Persistent Memory Region: Not Supported 00:28:39.464 Optional Asynchronous Events Supported 00:28:39.464 Namespace Attribute Notices: Supported 00:28:39.464 Firmware Activation Notices: Not Supported 00:28:39.464 ANA Change Notices: Supported 00:28:39.464 PLE Aggregate Log Change Notices: Not Supported 00:28:39.464 LBA Status Info Alert Notices: Not Supported 00:28:39.464 EGE Aggregate Log Change Notices: Not Supported 00:28:39.464 Normal NVM Subsystem Shutdown event: Not Supported 00:28:39.464 Zone Descriptor Change Notices: Not Supported 00:28:39.464 Discovery Log Change Notices: Not Supported 00:28:39.464 Controller Attributes 00:28:39.464 128-bit Host Identifier: Supported 00:28:39.464 Non-Operational Permissive Mode: Not Supported 00:28:39.464 NVM Sets: Not 
Supported 00:28:39.464 Read Recovery Levels: Not Supported 00:28:39.464 Endurance Groups: Not Supported 00:28:39.464 Predictable Latency Mode: Not Supported 00:28:39.464 Traffic Based Keep ALive: Supported 00:28:39.464 Namespace Granularity: Not Supported 00:28:39.464 SQ Associations: Not Supported 00:28:39.464 UUID List: Not Supported 00:28:39.464 Multi-Domain Subsystem: Not Supported 00:28:39.464 Fixed Capacity Management: Not Supported 00:28:39.464 Variable Capacity Management: Not Supported 00:28:39.464 Delete Endurance Group: Not Supported 00:28:39.464 Delete NVM Set: Not Supported 00:28:39.464 Extended LBA Formats Supported: Not Supported 00:28:39.464 Flexible Data Placement Supported: Not Supported 00:28:39.464 00:28:39.464 Controller Memory Buffer Support 00:28:39.464 ================================ 00:28:39.464 Supported: No 00:28:39.464 00:28:39.464 Persistent Memory Region Support 00:28:39.464 ================================ 00:28:39.464 Supported: No 00:28:39.464 00:28:39.464 Admin Command Set Attributes 00:28:39.464 ============================ 00:28:39.464 Security Send/Receive: Not Supported 00:28:39.464 Format NVM: Not Supported 00:28:39.464 Firmware Activate/Download: Not Supported 00:28:39.464 Namespace Management: Not Supported 00:28:39.464 Device Self-Test: Not Supported 00:28:39.464 Directives: Not Supported 00:28:39.464 NVMe-MI: Not Supported 00:28:39.464 Virtualization Management: Not Supported 00:28:39.464 Doorbell Buffer Config: Not Supported 00:28:39.464 Get LBA Status Capability: Not Supported 00:28:39.464 Command & Feature Lockdown Capability: Not Supported 00:28:39.464 Abort Command Limit: 4 00:28:39.464 Async Event Request Limit: 4 00:28:39.464 Number of Firmware Slots: N/A 00:28:39.464 Firmware Slot 1 Read-Only: N/A 00:28:39.464 Firmware Activation Without Reset: N/A 00:28:39.464 Multiple Update Detection Support: N/A 00:28:39.464 Firmware Update Granularity: No Information Provided 00:28:39.464 Per-Namespace SMART Log: Yes 
00:28:39.464 Asymmetric Namespace Access Log Page: Supported 00:28:39.464 ANA Transition Time : 10 sec 00:28:39.464 00:28:39.464 Asymmetric Namespace Access Capabilities 00:28:39.464 ANA Optimized State : Supported 00:28:39.464 ANA Non-Optimized State : Supported 00:28:39.464 ANA Inaccessible State : Supported 00:28:39.464 ANA Persistent Loss State : Supported 00:28:39.464 ANA Change State : Supported 00:28:39.464 ANAGRPID is not changed : No 00:28:39.464 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:39.464 00:28:39.464 ANA Group Identifier Maximum : 128 00:28:39.464 Number of ANA Group Identifiers : 128 00:28:39.464 Max Number of Allowed Namespaces : 1024 00:28:39.464 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:39.464 Command Effects Log Page: Supported 00:28:39.464 Get Log Page Extended Data: Supported 00:28:39.464 Telemetry Log Pages: Not Supported 00:28:39.464 Persistent Event Log Pages: Not Supported 00:28:39.464 Supported Log Pages Log Page: May Support 00:28:39.464 Commands Supported & Effects Log Page: Not Supported 00:28:39.464 Feature Identifiers & Effects Log Page:May Support 00:28:39.464 NVMe-MI Commands & Effects Log Page: May Support 00:28:39.464 Data Area 4 for Telemetry Log: Not Supported 00:28:39.464 Error Log Page Entries Supported: 128 00:28:39.464 Keep Alive: Supported 00:28:39.464 Keep Alive Granularity: 1000 ms 00:28:39.464 00:28:39.464 NVM Command Set Attributes 00:28:39.464 ========================== 00:28:39.464 Submission Queue Entry Size 00:28:39.464 Max: 64 00:28:39.464 Min: 64 00:28:39.464 Completion Queue Entry Size 00:28:39.465 Max: 16 00:28:39.465 Min: 16 00:28:39.465 Number of Namespaces: 1024 00:28:39.465 Compare Command: Not Supported 00:28:39.465 Write Uncorrectable Command: Not Supported 00:28:39.465 Dataset Management Command: Supported 00:28:39.465 Write Zeroes Command: Supported 00:28:39.465 Set Features Save Field: Not Supported 00:28:39.465 Reservations: Not Supported 00:28:39.465 Timestamp: Not Supported 
00:28:39.465 Copy: Not Supported 00:28:39.465 Volatile Write Cache: Present 00:28:39.465 Atomic Write Unit (Normal): 1 00:28:39.465 Atomic Write Unit (PFail): 1 00:28:39.465 Atomic Compare & Write Unit: 1 00:28:39.465 Fused Compare & Write: Not Supported 00:28:39.465 Scatter-Gather List 00:28:39.465 SGL Command Set: Supported 00:28:39.465 SGL Keyed: Not Supported 00:28:39.465 SGL Bit Bucket Descriptor: Not Supported 00:28:39.465 SGL Metadata Pointer: Not Supported 00:28:39.465 Oversized SGL: Not Supported 00:28:39.465 SGL Metadata Address: Not Supported 00:28:39.465 SGL Offset: Supported 00:28:39.465 Transport SGL Data Block: Not Supported 00:28:39.465 Replay Protected Memory Block: Not Supported 00:28:39.465 00:28:39.465 Firmware Slot Information 00:28:39.465 ========================= 00:28:39.465 Active slot: 0 00:28:39.465 00:28:39.465 Asymmetric Namespace Access 00:28:39.465 =========================== 00:28:39.465 Change Count : 0 00:28:39.465 Number of ANA Group Descriptors : 1 00:28:39.465 ANA Group Descriptor : 0 00:28:39.465 ANA Group ID : 1 00:28:39.465 Number of NSID Values : 1 00:28:39.465 Change Count : 0 00:28:39.465 ANA State : 1 00:28:39.465 Namespace Identifier : 1 00:28:39.465 00:28:39.465 Commands Supported and Effects 00:28:39.465 ============================== 00:28:39.465 Admin Commands 00:28:39.465 -------------- 00:28:39.465 Get Log Page (02h): Supported 00:28:39.465 Identify (06h): Supported 00:28:39.465 Abort (08h): Supported 00:28:39.465 Set Features (09h): Supported 00:28:39.465 Get Features (0Ah): Supported 00:28:39.465 Asynchronous Event Request (0Ch): Supported 00:28:39.465 Keep Alive (18h): Supported 00:28:39.465 I/O Commands 00:28:39.465 ------------ 00:28:39.465 Flush (00h): Supported 00:28:39.465 Write (01h): Supported LBA-Change 00:28:39.465 Read (02h): Supported 00:28:39.465 Write Zeroes (08h): Supported LBA-Change 00:28:39.465 Dataset Management (09h): Supported 00:28:39.465 00:28:39.465 Error Log 00:28:39.465 ========= 
00:28:39.465 Entry: 0 00:28:39.465 Error Count: 0x3 00:28:39.465 Submission Queue Id: 0x0 00:28:39.465 Command Id: 0x5 00:28:39.465 Phase Bit: 0 00:28:39.465 Status Code: 0x2 00:28:39.465 Status Code Type: 0x0 00:28:39.465 Do Not Retry: 1 00:28:39.465 Error Location: 0x28 00:28:39.465 LBA: 0x0 00:28:39.465 Namespace: 0x0 00:28:39.465 Vendor Log Page: 0x0 00:28:39.465 ----------- 00:28:39.465 Entry: 1 00:28:39.465 Error Count: 0x2 00:28:39.465 Submission Queue Id: 0x0 00:28:39.465 Command Id: 0x5 00:28:39.465 Phase Bit: 0 00:28:39.465 Status Code: 0x2 00:28:39.465 Status Code Type: 0x0 00:28:39.465 Do Not Retry: 1 00:28:39.465 Error Location: 0x28 00:28:39.465 LBA: 0x0 00:28:39.465 Namespace: 0x0 00:28:39.465 Vendor Log Page: 0x0 00:28:39.465 ----------- 00:28:39.465 Entry: 2 00:28:39.465 Error Count: 0x1 00:28:39.465 Submission Queue Id: 0x0 00:28:39.465 Command Id: 0x4 00:28:39.465 Phase Bit: 0 00:28:39.465 Status Code: 0x2 00:28:39.465 Status Code Type: 0x0 00:28:39.465 Do Not Retry: 1 00:28:39.465 Error Location: 0x28 00:28:39.465 LBA: 0x0 00:28:39.465 Namespace: 0x0 00:28:39.465 Vendor Log Page: 0x0 00:28:39.465 00:28:39.465 Number of Queues 00:28:39.465 ================ 00:28:39.465 Number of I/O Submission Queues: 128 00:28:39.465 Number of I/O Completion Queues: 128 00:28:39.465 00:28:39.465 ZNS Specific Controller Data 00:28:39.465 ============================ 00:28:39.465 Zone Append Size Limit: 0 00:28:39.465 00:28:39.465 00:28:39.465 Active Namespaces 00:28:39.465 ================= 00:28:39.465 get_feature(0x05) failed 00:28:39.465 Namespace ID:1 00:28:39.465 Command Set Identifier: NVM (00h) 00:28:39.465 Deallocate: Supported 00:28:39.465 Deallocated/Unwritten Error: Not Supported 00:28:39.465 Deallocated Read Value: Unknown 00:28:39.465 Deallocate in Write Zeroes: Not Supported 00:28:39.465 Deallocated Guard Field: 0xFFFF 00:28:39.465 Flush: Supported 00:28:39.465 Reservation: Not Supported 00:28:39.465 Namespace Sharing Capabilities: Multiple 
Controllers 00:28:39.465 Size (in LBAs): 3125627568 (1490GiB) 00:28:39.465 Capacity (in LBAs): 3125627568 (1490GiB) 00:28:39.465 Utilization (in LBAs): 3125627568 (1490GiB) 00:28:39.465 UUID: e9633785-eaa7-4a32-a183-73e2b2e94014 00:28:39.465 Thin Provisioning: Not Supported 00:28:39.465 Per-NS Atomic Units: Yes 00:28:39.465 Atomic Boundary Size (Normal): 0 00:28:39.465 Atomic Boundary Size (PFail): 0 00:28:39.465 Atomic Boundary Offset: 0 00:28:39.465 NGUID/EUI64 Never Reused: No 00:28:39.465 ANA group ID: 1 00:28:39.465 Namespace Write Protected: No 00:28:39.465 Number of LBA Formats: 1 00:28:39.465 Current LBA Format: LBA Format #00 00:28:39.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:39.465 00:28:39.465 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:39.465 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.465 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:39.465 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.465 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:39.465 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.465 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.465 rmmod nvme_tcp 00:28:39.725 rmmod nvme_fabrics 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.725 10:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.629 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:41.630 10:39:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:41.630 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:41.888 10:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:44.423 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:44.727 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
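`clean_kernel_target` (the `nvmf/common.sh@712`–`@723` trace above) undoes the setup in strict reverse order: disable the namespace, remove the port's subsystem symlink, then `rmdir` the configfs nodes child-before-parent, and finally unload `nvmet_tcp`/`nvmet`. A print-only sketch of that ordering (same configfs paths as the trace; dry-run so it needs neither root nor the modules):

```shell
#!/bin/sh
# Mirror clean_kernel_target from the trace. Ordering matters: the
# port symlink must go before the subsystem directory, and namespace
# and port directories must be removed before their parents, or the
# rmdir calls fail with EBUSY/ENOTEMPTY.
kernel_target_cleanup_cmds() {
    nqn=$1
    cfg=/sys/kernel/config/nvmet
    echo "echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable"
    echo "rm -f $cfg/ports/1/subsystems/$nqn"
    echo "rmdir $cfg/subsystems/$nqn/namespaces/1"
    echo "rmdir $cfg/ports/1"
    echo "rmdir $cfg/subsystems/$nqn"
    echo "modprobe -r nvmet_tcp nvmet"
}
kernel_target_cleanup_cmds nqn.2016-06.io.spdk:testnqn
```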
00:28:46.104 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:46.363 00:28:46.363 real 0m17.334s 00:28:46.363 user 0m4.256s 00:28:46.363 sys 0m8.788s 00:28:46.363 10:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.363 10:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.363 ************************************ 00:28:46.363 END TEST nvmf_identify_kernel_target 00:28:46.363 ************************************ 00:28:46.363 10:39:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:46.363 10:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:46.363 10:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.363 10:39:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.363 ************************************ 00:28:46.363 START TEST nvmf_auth_host 00:28:46.363 ************************************ 00:28:46.363 10:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:46.363 * Looking for test storage... 
00:28:46.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.622 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.622 --rc genhtml_branch_coverage=1 00:28:46.622 --rc genhtml_function_coverage=1 00:28:46.622 --rc genhtml_legend=1 00:28:46.622 --rc geninfo_all_blocks=1 00:28:46.623 --rc geninfo_unexecuted_blocks=1 00:28:46.623 00:28:46.623 ' 00:28:46.623 10:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:46.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.623 --rc genhtml_branch_coverage=1 00:28:46.623 --rc genhtml_function_coverage=1 00:28:46.623 --rc genhtml_legend=1 00:28:46.623 --rc geninfo_all_blocks=1 00:28:46.623 --rc geninfo_unexecuted_blocks=1 00:28:46.623 00:28:46.623 ' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:46.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.623 --rc genhtml_branch_coverage=1 00:28:46.623 --rc genhtml_function_coverage=1 00:28:46.623 --rc genhtml_legend=1 00:28:46.623 --rc geninfo_all_blocks=1 00:28:46.623 --rc geninfo_unexecuted_blocks=1 00:28:46.623 00:28:46.623 ' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:46.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.623 --rc genhtml_branch_coverage=1 00:28:46.623 --rc genhtml_function_coverage=1 00:28:46.623 --rc genhtml_legend=1 00:28:46.623 --rc geninfo_all_blocks=1 00:28:46.623 --rc geninfo_unexecuted_blocks=1 00:28:46.623 00:28:46.623 ' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
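The `lt 1.15 2` trace above shows `scripts/common.sh` deciding which lcov flags to use via `cmp_versions`: both versions are split on `.`/`-` into arrays (`IFS=.-`, `read -ra ver1`), then compared component-wise, with missing components treated as zero. A condensed bash re-sketch of just the less-than case (the original handles several operators; this reduction is this sketch's simplification):

```shell
#!/bin/bash
# Component-wise "version less-than", mirroring the cmp_versions trace:
# split on '.' and '-', compare numerically left to right, pad the
# shorter version with zeros. Returns 0 (true) iff $1 < $2.
lt() {
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local v a b len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1    # equal is not less-than
}
if lt 1.15 2; then echo "lcov 1.15 predates 2"; fi
```

This is why the run picks the pre-2.0 lcov option set (`--rc lcov_branch_coverage=1 ...`) seen in the `LCOV_OPTS`/`LCOV` exports above: the detected `lcov --version` component 1 is below 2, so the first comparison already settles it.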
00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.623 10:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:46.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.623 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.624 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.624 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.624 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.624 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.624 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.624 10:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.624 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.624 10:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.210 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:53.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:53.211 Found net devices under 0000:86:00.0: cvl_0_0 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:53.211 Found net devices under 0000:86:00.1: cvl_0_1 00:28:53.211 10:39:29 
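The device-discovery loop above globs each PCI device's `net/` directory in sysfs and then reduces the full paths to interface names in one step with `"${pci_net_devs[@]##*/}"`. A small illustration of that array expansion (the paths below are illustrative examples, not read from a live system):

```shell
# Sketch of the basename trick used by nvmf/common.sh@427 above:
# "##*/" strips the longest prefix ending in '/' from EVERY array element.
paths=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0"
       "/sys/bus/pci/devices/0000:86:00.1/net/cvl_0_1")
names=("${paths[@]##*/}")      # expansion applied per element
echo "${names[@]}"             # -> cvl_0_0 cvl_0_1
```

This avoids a `basename` subprocess per device, which matters inside a loop that runs once per discovered NIC.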
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.211 10:39:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.211 10:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:28:53.211 00:28:53.211 --- 10.0.0.2 ping statistics --- 00:28:53.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.211 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:53.211 00:28:53.211 --- 10.0.0.1 ping statistics --- 00:28:53.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.211 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2792586 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2792586 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2792586 ']' 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.211 10:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5e7233811d556a58d5c7a6963afa16b3 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jWr 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5e7233811d556a58d5c7a6963afa16b3 0 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5e7233811d556a58d5c7a6963afa16b3 0 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.211 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5e7233811d556a58d5c7a6963afa16b3 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jWr 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jWr 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.jWr 
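The `gen_dhchap_key` steps traced above draw random bytes with `xxd -p -c0 -l N /dev/urandom`, write the hex key to a `mktemp` file, and lock it to mode 0600. A hedged, simplified sketch of that pattern (assumption: the real `nvmf/common.sh` additionally wraps the hex key into the DHHC-1 secret format via the inline `python -` step, which is omitted here):

```shell
# Hedged sketch of the key-generation pattern in the trace above.
gen_key_hex() {
    local len=$1                               # desired length in hex chars
    xxd -p -c0 -l $((len / 2)) /dev/urandom    # len/2 random bytes -> len hex chars
}
key=$(gen_key_hex 32)
file=$(mktemp -t spdk.key-null.XXX)
echo "$key" > "$file"
chmod 0600 "$file"    # secrets must be owner-readable only
echo "$file"
```

The trace's `len` values (32, 48, 64 hex characters) correspond to 16-, 24-, and 32-byte keys, paired with digest ids 0 through 3 (null, sha256, sha384, sha512) when the key is formatted.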
00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba9e5865869fefba771e66252c00e15e5fb27818962e9fb38ba819063cc78511 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.t1G 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba9e5865869fefba771e66252c00e15e5fb27818962e9fb38ba819063cc78511 3 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba9e5865869fefba771e66252c00e15e5fb27818962e9fb38ba819063cc78511 3 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba9e5865869fefba771e66252c00e15e5fb27818962e9fb38ba819063cc78511 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.t1G 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.t1G 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.t1G 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=22e29698984d8414b9b1267ab3566211a44d88bf83811a5e 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HyR 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 22e29698984d8414b9b1267ab3566211a44d88bf83811a5e 0 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 22e29698984d8414b9b1267ab3566211a44d88bf83811a5e 0 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=22e29698984d8414b9b1267ab3566211a44d88bf83811a5e 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HyR 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HyR 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.HyR 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1f60bb730fe1eb9fbdc140515517a8a9ff13d2f365d0f8b1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ICm 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1f60bb730fe1eb9fbdc140515517a8a9ff13d2f365d0f8b1 2 00:28:53.212 10:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1f60bb730fe1eb9fbdc140515517a8a9ff13d2f365d0f8b1 2 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1f60bb730fe1eb9fbdc140515517a8a9ff13d2f365d0f8b1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ICm 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ICm 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ICm 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d662366e853af85c02f886f711ac03dd 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rxP 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d662366e853af85c02f886f711ac03dd 1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d662366e853af85c02f886f711ac03dd 1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d662366e853af85c02f886f711ac03dd 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rxP 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rxP 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.rxP 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=594384bff9827c98adb61d5dd58f70a5 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9uM 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 594384bff9827c98adb61d5dd58f70a5 1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 594384bff9827c98adb61d5dd58f70a5 1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=594384bff9827c98adb61d5dd58f70a5 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9uM 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9uM 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9uM 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.212 10:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:53.212 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b89f871c415286ca48875b9c025da0eab83538de3461b3bb 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tTM 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b89f871c415286ca48875b9c025da0eab83538de3461b3bb 2 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b89f871c415286ca48875b9c025da0eab83538de3461b3bb 2 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b89f871c415286ca48875b9c025da0eab83538de3461b3bb 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tTM 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tTM 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.tTM 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=294e40bc1711ce5e9f7489aa16b1728a 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3YI 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 294e40bc1711ce5e9f7489aa16b1728a 0 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 294e40bc1711ce5e9f7489aa16b1728a 0 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=294e40bc1711ce5e9f7489aa16b1728a 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3YI 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3YI 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.3YI 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=df4198e095a95323e9d89edceded70b62c37b458dcb73cb477f2b07c8aac514b 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NUN 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key df4198e095a95323e9d89edceded70b62c37b458dcb73cb477f2b07c8aac514b 3 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 df4198e095a95323e9d89edceded70b62c37b458dcb73cb477f2b07c8aac514b 3 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=df4198e095a95323e9d89edceded70b62c37b458dcb73cb477f2b07c8aac514b 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:53.213 10:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NUN 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NUN 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NUN 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2792586 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2792586 ']' 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
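The gen_dhchap_key/format_dhchap_key steps traced above draw a random hex secret with `xxd -p -c0 -l <n> /dev/urandom` and pipe it through an inline `python -` step (whose body the xtrace does not show) to produce the `DHHC-1:<digest>:<base64>:` strings that later become the `/tmp/spdk.key-*` keyring files. A minimal sketch of that formatting, assuming the base64 payload is the ASCII hex secret followed by its little-endian CRC-32, as in the NVMe DH-HMAC-CHAP secret representation used by nvme-cli; function names here mirror the trace but the implementation is inferred, not taken from `nvmf/common.sh`:

```python
import base64
import secrets
import zlib

def gen_hex_secret(nbytes: int) -> str:
    # Rough equivalent of `xxd -p -c0 -l <nbytes> /dev/urandom` in the trace:
    # nbytes random bytes rendered as a 2*nbytes-character hex string.
    return secrets.token_hex(nbytes)

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    # The secret is the ASCII hex string itself; append its CRC-32
    # (little-endian) and base64-encode the result. The digest field
    # matches digest=0..3 in the trace: 00=null, 01=sha256,
    # 02=sha384, 03=sha512.
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return f"{prefix}:{digest:02x}:{base64.b64encode(raw + crc).decode()}:"

# Example with the 48-character sha384-class secret from the trace above.
formatted = format_dhchap_key(
    "1f60bb730fe1eb9fbdc140515517a8a9ff13d2f365d0f8b1", 2)
```

Under this assumption, decoding the base64 payload of any `DHHC-1:` string in the log and stripping the last four bytes recovers the original hex secret.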
00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.213 10:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jWr 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.t1G ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t1G 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HyR 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ICm ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ICm 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.rxP 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9uM ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9uM 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.tTM 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3YI ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3YI 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NUN 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.471 10:39:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:53.471 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:53.730 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:53.730 10:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:56.264 Waiting for block devices as requested 00:28:56.264 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:56.522 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:56.522 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:56.522 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:56.522 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:56.781 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:56.781 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:56.781 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:56.781 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:57.039 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:57.039 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:57.039 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:57.039 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:57.298 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:57.298 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:57.298 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:57.555 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:58.121 No valid GPT data, bailing 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:58.121 00:28:58.121 Discovery Log Number of Records 2, Generation counter 2 00:28:58.121 =====Discovery Log Entry 0====== 00:28:58.121 trtype: tcp 00:28:58.121 adrfam: ipv4 00:28:58.121 subtype: current discovery subsystem 00:28:58.121 treq: not specified, sq flow control disable supported 00:28:58.121 portid: 1 00:28:58.121 trsvcid: 4420 00:28:58.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:58.121 traddr: 10.0.0.1 00:28:58.121 eflags: none 00:28:58.121 sectype: none 00:28:58.121 =====Discovery Log Entry 1====== 00:28:58.121 trtype: tcp 00:28:58.121 adrfam: ipv4 00:28:58.121 subtype: nvme subsystem 00:28:58.121 treq: not specified, sq flow control disable supported 00:28:58.121 portid: 1 00:28:58.121 trsvcid: 4420 00:28:58.121 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:58.121 traddr: 10.0.0.1 00:28:58.121 eflags: none 00:28:58.121 sectype: none 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.121 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.383 nvme0n1 00:28:58.383 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.383 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.383 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.383 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.383 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.383 10:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.383 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.659 nvme0n1 00:28:58.659 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.659 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.659 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.659 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.660 10:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.660 
10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.660 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.946 nvme0n1 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:28:58.946 nvme0n1 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.946 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:59.232 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.233 nvme0n1 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:59.233 10:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.233 10:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.492 nvme0n1 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.492 
10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.492 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:28:59.493 
10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.493 10:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.493 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.751 nvme0n1 00:28:59.751 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.751 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.752 10:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.752 10:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.752 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.011 nvme0n1 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.011 10:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.011 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.269 nvme0n1 00:29:00.269 10:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:00.269 10:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.269 10:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.528 nvme0n1 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.528 10:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.528 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.787 nvme0n1 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.787 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.046 nvme0n1 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:01.046 
10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.046 10:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.305 nvme0n1
00:29:01.305 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.305 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:01.305 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:01.305 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.305 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.305 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo:
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3:
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo:
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3:
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.563 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.820 nvme0n1
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]]
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.820 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.821 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.079 nvme0n1
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.079 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.339 nvme0n1
00:29:02.339 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.339 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:02.339 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:02.339 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.339 10:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:02.339 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06:
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=:
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06:
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]]
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=:
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:02.596 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.597 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.855 nvme0n1
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==:
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==:
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==:
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==:
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.855 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.421 nvme0n1
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo:
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3:
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo:
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]]
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3:
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:03.421 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:03.422 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:03.422 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:03.422 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:03.422 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:03.422 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:03.422 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.422 10:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.680 nvme0n1
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.680 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]]
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.939 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:04.198 nvme0n1
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate
sha256 ffdhe6144 4 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.198 10:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.198 10:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.767 nvme0n1 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.767 10:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.767 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.337 nvme0n1 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.337 10:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.337 10:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.337 10:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.337 10:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.904 nvme0n1 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:05.904 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.905 10:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.905 10:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.472 nvme0n1 00:29:06.472 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.472 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.472 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.472 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.472 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.732 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.302 nvme0n1 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.302 
10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.302 10:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.869 nvme0n1 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.869 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.870 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.129 nvme0n1 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.129 
10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.129 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.388 nvme0n1 
00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:08.388 10:39:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.388 
10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.388 10:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.388 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.648 nvme0n1 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.648 10:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.648 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.648 nvme0n1 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.907 10:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.907 nvme0n1 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.907 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.165 nvme0n1 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.165 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.423 
10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.423 10:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.423 nvme0n1 00:29:09.423 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:09.423 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.423 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.423 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.424 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.424 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.682 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.682 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.682 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.682 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 
00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.683 10:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.683 nvme0n1 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.683 10:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.683 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.942 nvme0n1 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.942 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.201 nvme0n1 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.201 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.460 10:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.460 10:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.460 10:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.460 10:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.718 nvme0n1 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.718 
10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.718 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.976 nvme0n1 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.976 10:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.976 10:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.976 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.977 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.236 nvme0n1 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.236 10:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.236 10:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.493 nvme0n1 00:29:11.493 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.493 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.493 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.493 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.494 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.494 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.750 10:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:11.750 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:11.751 10:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:11.751 
10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.751 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.009 nvme0n1 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:12.009 10:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.009 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.268 nvme0n1 
00:29:12.268 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.268 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.268 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.268 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.268 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.268 10:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:12.526 10:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.526 
10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.526 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.783 nvme0n1 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.783 10:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.783 10:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo:
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]]
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3:
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:12.783 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.784 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.041 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.299 nvme0n1
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.299 10:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.864 nvme0n1
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:13.864 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:13.865 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:13.865 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:13.865 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.865 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.122 nvme0n1
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06:
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=:
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06:
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]]
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=:
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:14.122 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.379 10:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.943 nvme0n1
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==:
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==:
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==:
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]]
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==:
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:14.943 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.944 10:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.508 nvme0n1
00:29:15.508 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.508 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:15.508 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:15.508 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.508 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.508 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.508 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo:
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3:
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo:
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]]
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3:
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.509 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.073 nvme0n1
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.073 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.330 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==:
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]]
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD:
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.331 10:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.896 nvme0n1
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=:
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.897 10:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x 00:29:17.463 nvme0n1 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:17.463 10:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.463 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.722 nvme0n1 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.722 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.995 nvme0n1 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.995 nvme0n1 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.995 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.253 nvme0n1 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:18.253 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.511 10:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:18.511 nvme0n1 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:18.511 10:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.511 10:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.511 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.769 nvme0n1 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:18.770 10:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.770 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.028 nvme0n1 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.028 
10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:19.028 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.029 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.029 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.029 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.029 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.287 10:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.287 nvme0n1 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.287 10:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.287 10:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.287 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.545 10:39:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.545 nvme0n1 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:19.545 10:39:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.545 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.546 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.804 nvme0n1 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.804 
10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.804 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.805 
10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.805 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.062 nvme0n1 00:29:20.062 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.062 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.062 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.062 10:39:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.062 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.062 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.320 10:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.578 nvme0n1 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.578 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.836 nvme0n1 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.836 10:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.836 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.094 nvme0n1 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.094 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.352 10:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.352 10:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.610 nvme0n1 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.610 
10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.610 10:39:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.610 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.611 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.921 nvme0n1 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.921 10:39:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:21.921 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.922 10:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.484 nvme0n1 00:29:22.484 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.484 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.484 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.484 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.484 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.484 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:22.485 
10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.485 10:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.485 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.051 nvme0n1 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.051 10:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.051 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.309 nvme0n1 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.309 10:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:23.309 10:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.309 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.874 nvme0n1 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.874 
10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWU3MjMzODExZDU1NmE1OGQ1YzdhNjk2M2FmYTE2YjMW8d06: 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmE5ZTU4NjU4NjlmZWZiYTc3MWU2NjI1MmMwMGUxNWU1ZmIyNzgxODk2MmU5ZmIzOGJhODE5MDYzY2M3ODUxMfkkb1U=: 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.874 10:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.874 10:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.440 nvme0n1 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.440 10:40:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:24.440 10:40:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.440 10:40:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.440 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.005 nvme0n1 00:29:25.005 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.005 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.005 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.005 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.005 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.005 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.263 10:40:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:25.263 10:40:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.263 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.264 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.264 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.264 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.264 10:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.829 nvme0n1 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjg5Zjg3MWM0MTUyODZjYTQ4ODc1YjljMDI1ZGEwZWFiODM1MzhkZTM0NjFiM2JikhNgLw==: 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: ]] 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ZTQwYmMxNzExY2U1ZTlmNzQ4OWFhMTZiMTcyOGFu96xD: 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.829 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:25.830 10:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.830 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.396 nvme0n1 00:29:26.396 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.396 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.396 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.396 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.396 10:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY0MTk4ZTA5NWE5NTMyM2U5ZDg5ZWRjZWRlZDcwYjYyYzM3YjQ1OGRjYjczY2I0NzdmMmIwN2M4YWFjNTE0Yk2EtbY=: 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.396 
10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.396 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.961 nvme0n1 00:29:26.961 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.961 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.961 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.961 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.961 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:26.961 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.961 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.962 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.962 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.962 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.220 request: 00:29:27.220 { 00:29:27.220 "name": "nvme0", 00:29:27.220 "trtype": "tcp", 00:29:27.220 "traddr": "10.0.0.1", 00:29:27.220 "adrfam": "ipv4", 00:29:27.220 "trsvcid": "4420", 00:29:27.220 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:27.220 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:27.220 "prchk_reftag": false, 00:29:27.220 "prchk_guard": false, 00:29:27.220 "hdgst": false, 00:29:27.220 "ddgst": false, 00:29:27.220 "allow_unrecognized_csi": false, 00:29:27.220 "method": "bdev_nvme_attach_controller", 00:29:27.220 "req_id": 1 00:29:27.220 } 00:29:27.220 Got JSON-RPC error 
response 00:29:27.220 response: 00:29:27.220 { 00:29:27.220 "code": -5, 00:29:27.220 "message": "Input/output error" 00:29:27.220 } 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.220 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.221 request: 
00:29:27.221 { 00:29:27.221 "name": "nvme0", 00:29:27.221 "trtype": "tcp", 00:29:27.221 "traddr": "10.0.0.1", 00:29:27.221 "adrfam": "ipv4", 00:29:27.221 "trsvcid": "4420", 00:29:27.221 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:27.221 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:27.221 "prchk_reftag": false, 00:29:27.221 "prchk_guard": false, 00:29:27.221 "hdgst": false, 00:29:27.221 "ddgst": false, 00:29:27.221 "dhchap_key": "key2", 00:29:27.221 "allow_unrecognized_csi": false, 00:29:27.221 "method": "bdev_nvme_attach_controller", 00:29:27.221 "req_id": 1 00:29:27.221 } 00:29:27.221 Got JSON-RPC error response 00:29:27.221 response: 00:29:27.221 { 00:29:27.221 "code": -5, 00:29:27.221 "message": "Input/output error" 00:29:27.221 } 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.221 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.479 10:40:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.479 10:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.479 request: 00:29:27.479 { 00:29:27.479 "name": "nvme0", 00:29:27.479 "trtype": "tcp", 00:29:27.479 "traddr": "10.0.0.1", 00:29:27.480 "adrfam": "ipv4", 00:29:27.480 "trsvcid": "4420", 00:29:27.480 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:27.480 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:27.480 "prchk_reftag": false, 00:29:27.480 "prchk_guard": false, 00:29:27.480 "hdgst": false, 00:29:27.480 "ddgst": false, 00:29:27.480 "dhchap_key": "key1", 00:29:27.480 "dhchap_ctrlr_key": "ckey2", 00:29:27.480 "allow_unrecognized_csi": false, 00:29:27.480 "method": "bdev_nvme_attach_controller", 00:29:27.480 "req_id": 1 00:29:27.480 } 00:29:27.480 Got JSON-RPC error response 00:29:27.480 response: 00:29:27.480 { 00:29:27.480 "code": -5, 00:29:27.480 "message": "Input/output error" 00:29:27.480 } 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.480 nvme0n1 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:27.480 10:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.480 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:27.738 
10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.738 request: 00:29:27.738 { 00:29:27.738 "name": "nvme0", 00:29:27.738 "dhchap_key": "key1", 00:29:27.738 "dhchap_ctrlr_key": "ckey2", 00:29:27.738 "method": "bdev_nvme_set_keys", 00:29:27.738 "req_id": 1 00:29:27.738 } 00:29:27.738 Got JSON-RPC error response 00:29:27.738 response: 
00:29:27.738 { 00:29:27.738 "code": -13, 00:29:27.738 "message": "Permission denied" 00:29:27.738 } 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:27.738 10:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:29.112 10:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.112 10:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:29.112 10:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.112 10:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.112 10:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.112 10:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:29.112 10:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJlMjk2OTg5ODRkODQxNGI5YjEyNjdhYjM1NjYyMTFhNDRkODhiZjgzODExYTVlmAhbbw==: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: ]] 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWY2MGJiNzMwZmUxZWI5ZmJkYzE0MDUxNTUxN2E4YTlmZjEzZDJmMzY1ZDBmOGIxzXSk6g==: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.044 nvme0n1 00:29:30.044 10:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDY2MjM2NmU4NTNhZjg1YzAyZjg4NmY3MTFhYzAzZGSCbvdo: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: ]] 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTk0Mzg0YmZmOTgyN2M5OGFkYjYxZDVkZDU4ZjcwYTXR4vi3: 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:30.044 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:30.045 10:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.045 request: 00:29:30.045 { 00:29:30.045 "name": "nvme0", 00:29:30.045 "dhchap_key": "key2", 00:29:30.045 "dhchap_ctrlr_key": "ckey1", 00:29:30.045 "method": "bdev_nvme_set_keys", 00:29:30.045 "req_id": 1 00:29:30.045 } 00:29:30.045 Got JSON-RPC error response 00:29:30.045 response: 00:29:30.045 { 00:29:30.045 "code": -13, 00:29:30.045 "message": "Permission denied" 00:29:30.045 } 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:30.045 10:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:30.045 10:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.418 rmmod nvme_tcp 
00:29:31.418 rmmod nvme_fabrics 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2792586 ']' 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2792586 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2792586 ']' 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2792586 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2792586 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2792586' 00:29:31.418 killing process with pid 2792586 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2792586 00:29:31.418 10:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2792586 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.418 10:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:33.950 10:40:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:33.950 10:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:36.490 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:36.490 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:37.869 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:37.870 10:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.jWr /tmp/spdk.key-null.HyR /tmp/spdk.key-sha256.rxP /tmp/spdk.key-sha384.tTM 
/tmp/spdk.key-sha512.NUN /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:37.870 10:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:41.240 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:41.240 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:41.240 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:41.240 00:29:41.240 real 0m54.527s 00:29:41.240 user 0m48.734s 00:29:41.240 sys 0m12.714s 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.240 ************************************ 00:29:41.240 END TEST nvmf_auth_host 00:29:41.240 ************************************ 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.240 ************************************ 00:29:41.240 START TEST nvmf_digest 00:29:41.240 ************************************ 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:41.240 * Looking for test storage... 00:29:41.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.240 10:40:18 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.240 --rc genhtml_branch_coverage=1 00:29:41.240 --rc genhtml_function_coverage=1 00:29:41.240 --rc genhtml_legend=1 00:29:41.240 --rc geninfo_all_blocks=1 00:29:41.240 --rc geninfo_unexecuted_blocks=1 00:29:41.240 00:29:41.240 ' 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.240 --rc genhtml_branch_coverage=1 00:29:41.240 --rc genhtml_function_coverage=1 00:29:41.240 --rc genhtml_legend=1 00:29:41.240 --rc geninfo_all_blocks=1 00:29:41.240 --rc geninfo_unexecuted_blocks=1 00:29:41.240 00:29:41.240 ' 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.240 --rc genhtml_branch_coverage=1 00:29:41.240 --rc genhtml_function_coverage=1 00:29:41.240 --rc genhtml_legend=1 00:29:41.240 --rc geninfo_all_blocks=1 00:29:41.240 --rc geninfo_unexecuted_blocks=1 00:29:41.240 00:29:41.240 ' 00:29:41.240 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.241 --rc genhtml_branch_coverage=1 00:29:41.241 --rc genhtml_function_coverage=1 00:29:41.241 --rc genhtml_legend=1 00:29:41.241 --rc geninfo_all_blocks=1 00:29:41.241 --rc geninfo_unexecuted_blocks=1 00:29:41.241 00:29:41.241 ' 00:29:41.241 10:40:18 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.241 
10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.241 10:40:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.241 10:40:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.810 10:40:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:47.810 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:47.810 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:47.810 Found net devices under 0000:86:00.0: cvl_0_0 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.810 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:47.811 Found net devices under 0000:86:00.1: cvl_0_1 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:29:47.811 00:29:47.811 --- 10.0.0.2 ping statistics --- 00:29:47.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.811 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:29:47.811 00:29:47.811 --- 10.0.0.1 ping statistics --- 00:29:47.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.811 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:47.811 ************************************ 00:29:47.811 START TEST nvmf_digest_clean 00:29:47.811 ************************************ 00:29:47.811 
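The `nvmf_tcp_init` sequence traced above (flush addresses, create a target-side network namespace, move one port of the NIC pair into it, assign 10.0.0.1/10.0.0.2, bring the links up, open TCP port 4420 in iptables, and verify with a ping in each direction) can be condensed into a standalone sketch. The interface names (`cvl_0_0`, `cvl_0_1`), namespace name, addresses, and port are taken from the log; the script itself is an illustrative reconstruction, not SPDK's `nvmf/common.sh`, and it needs root plus real NICs to execute, so a `DRY_RUN` guard is included to let the command sequence be inspected without touching the host.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run guard: DRY_RUN=1 prints each command instead of executing it,
# so the sequence can be inspected without root or real NICs present.
run() { if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "$*"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
    local ns=cvl_0_0_ns_spdk          # target-side network namespace (from the log)
    local tgt_if=cvl_0_0 ini_if=cvl_0_1
    local tgt_ip=10.0.0.2 ini_ip=10.0.0.1 port=4420

    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"                  # target port lives in the namespace
    run ip addr add "$ini_ip/24" dev "$ini_if"
    run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport "$port" -j ACCEPT
    run ping -c 1 "$tgt_ip"                                # initiator -> target
    run ip netns exec "$ns" ping -c 1 "$ini_ip"            # target -> initiator
}

# Print the sequence without modifying the host:
DRY_RUN=1 nvmf_tcp_init_sketch
```

Run as-is it only echoes the commands in the same order the log shows; setting `DRY_RUN=0` (as root, with both ports unbound) would apply them for real.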
10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2806359 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2806359 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2806359 ']' 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.811 10:40:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:47.811 [2024-12-09 10:40:24.790639] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:29:47.811 [2024-12-09 10:40:24.790679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.811 [2024-12-09 10:40:24.870455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.811 [2024-12-09 10:40:24.911126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.811 [2024-12-09 10:40:24.911161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.811 [2024-12-09 10:40:24.911168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.811 [2024-12-09 10:40:24.911174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.811 [2024-12-09 10:40:24.911179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:47.811 [2024-12-09 10:40:24.911746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.811 10:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:47.811 null0 00:29:47.811 [2024-12-09 10:40:25.055619] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.811 [2024-12-09 10:40:25.079812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2806380 00:29:47.811 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2806380 /var/tmp/bperf.sock 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2806380 ']' 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:47.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:47.812 [2024-12-09 10:40:25.131102] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:29:47.812 [2024-12-09 10:40:25.131141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806380 ] 00:29:47.812 [2024-12-09 10:40:25.205369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.812 [2024-12-09 10:40:25.247624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:47.812 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:48.070 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:48.070 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:48.327 nvme0n1 00:29:48.327 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:48.327 10:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:48.327 Running I/O for 2 seconds... 00:29:50.252 25323.00 IOPS, 98.92 MiB/s [2024-12-09T09:40:27.976Z] 25410.00 IOPS, 99.26 MiB/s 00:29:50.252 Latency(us) 00:29:50.252 [2024-12-09T09:40:27.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.252 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:50.252 nvme0n1 : 2.00 25435.56 99.36 0.00 0.00 5027.53 2543.42 17850.76 00:29:50.252 [2024-12-09T09:40:27.976Z] =================================================================================================================== 00:29:50.252 [2024-12-09T09:40:27.976Z] Total : 25435.56 99.36 0.00 0.00 5027.53 2543.42 17850.76 00:29:50.252 { 00:29:50.252 "results": [ 00:29:50.252 { 00:29:50.252 "job": "nvme0n1", 00:29:50.252 "core_mask": "0x2", 00:29:50.252 "workload": "randread", 00:29:50.252 "status": "finished", 00:29:50.252 "queue_depth": 128, 00:29:50.252 "io_size": 4096, 00:29:50.252 "runtime": 2.004202, 00:29:50.252 "iops": 25435.559888673895, 00:29:50.252 "mibps": 99.3576558151324, 00:29:50.252 "io_failed": 0, 00:29:50.252 "io_timeout": 0, 00:29:50.252 "avg_latency_us": 5027.532858730843, 00:29:50.252 "min_latency_us": 2543.4209523809523, 00:29:50.252 "max_latency_us": 17850.758095238096 00:29:50.252 } 00:29:50.252 ], 00:29:50.252 "core_count": 1 00:29:50.252 } 00:29:50.252 10:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:50.252 10:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:50.252 10:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:50.252 10:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:50.252 | select(.opcode=="crc32c") 00:29:50.252 | "\(.module_name) \(.executed)"' 00:29:50.252 10:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2806380 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2806380 ']' 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2806380 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806380 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806380' 00:29:50.510 killing process with pid 2806380 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2806380 00:29:50.510 Received shutdown signal, test time was about 2.000000 seconds 00:29:50.510 00:29:50.510 Latency(us) 00:29:50.510 [2024-12-09T09:40:28.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.510 [2024-12-09T09:40:28.234Z] =================================================================================================================== 00:29:50.510 [2024-12-09T09:40:28.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.510 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2806380 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2806856 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2806856 /var/tmp/bperf.sock 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2806856 ']' 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:50.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.769 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:50.769 [2024-12-09 10:40:28.432126] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:29:50.769 [2024-12-09 10:40:28.432178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806856 ] 00:29:50.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:50.769 Zero copy mechanism will not be used. 
00:29:51.026 [2024-12-09 10:40:28.511115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.026 [2024-12-09 10:40:28.549223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.026 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.026 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:51.026 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:51.026 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:51.026 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:51.284 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:51.284 10:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:51.849 nvme0n1 00:29:51.849 10:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:51.850 10:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:51.850 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:51.850 Zero copy mechanism will not be used. 00:29:51.850 Running I/O for 2 seconds... 
00:29:53.722 5351.00 IOPS, 668.88 MiB/s [2024-12-09T09:40:31.446Z] 5828.50 IOPS, 728.56 MiB/s 00:29:53.722 Latency(us) 00:29:53.722 [2024-12-09T09:40:31.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.722 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:53.722 nvme0n1 : 2.00 5830.58 728.82 0.00 0.00 2741.55 643.66 7895.53 00:29:53.722 [2024-12-09T09:40:31.446Z] =================================================================================================================== 00:29:53.722 [2024-12-09T09:40:31.446Z] Total : 5830.58 728.82 0.00 0.00 2741.55 643.66 7895.53 00:29:53.722 { 00:29:53.722 "results": [ 00:29:53.722 { 00:29:53.722 "job": "nvme0n1", 00:29:53.722 "core_mask": "0x2", 00:29:53.722 "workload": "randread", 00:29:53.722 "status": "finished", 00:29:53.722 "queue_depth": 16, 00:29:53.722 "io_size": 131072, 00:29:53.722 "runtime": 2.002031, 00:29:53.722 "iops": 5830.579046977794, 00:29:53.722 "mibps": 728.8223808722242, 00:29:53.722 "io_failed": 0, 00:29:53.722 "io_timeout": 0, 00:29:53.722 "avg_latency_us": 2741.5517708346083, 00:29:53.722 "min_latency_us": 643.6571428571428, 00:29:53.722 "max_latency_us": 7895.527619047619 00:29:53.722 } 00:29:53.722 ], 00:29:53.722 "core_count": 1 00:29:53.722 } 00:29:53.722 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:53.722 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:53.981 | select(.opcode=="crc32c") 00:29:53.981 | "\(.module_name) \(.executed)"' 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2806856 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2806856 ']' 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2806856 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.981 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806856 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806856' 00:29:54.262 killing process with pid 2806856 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2806856 00:29:54.262 Received shutdown signal, test time was about 2.000000 seconds 
00:29:54.262 00:29:54.262 Latency(us) 00:29:54.262 [2024-12-09T09:40:31.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.262 [2024-12-09T09:40:31.986Z] =================================================================================================================== 00:29:54.262 [2024-12-09T09:40:31.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2806856 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2807558 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2807558 /var/tmp/bperf.sock 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2807558 ']' 00:29:54.262 10:40:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:54.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.262 10:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:54.262 [2024-12-09 10:40:31.917656] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:29:54.262 [2024-12-09 10:40:31.917702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2807558 ] 00:29:54.521 [2024-12-09 10:40:31.991824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.521 [2024-12-09 10:40:32.031995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.521 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.521 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:54.521 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:54.521 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:54.521 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:54.780 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:54.780 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:55.039 nvme0n1 00:29:55.039 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:55.039 10:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:55.039 Running I/O for 2 seconds... 
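The MiB/s column in the bdevperf summaries throughout this run is derived directly from the measured IOPS and the configured IO size: MiB/s = IOPS × io_size / 2^20. A minimal awk sketch reproducing the 4 KiB randwrite figure (28743.82 IOPS at io_size 4096, both values taken from the results table that follows):

```shell
# MiB/s = IOPS * io_size / 2^20; inputs are the reported IOPS and IO size
awk 'BEGIN { printf "%.2f\n", 28743.82 * 4096 / 1048576 }'   # prints 112.28
```

The same arithmetic checks out for the 128 KiB jobs (e.g. 6988.01 IOPS × 131072 / 2^20 = 873.50 MiB/s).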
00:29:57.350 28586.00 IOPS, 111.66 MiB/s [2024-12-09T09:40:35.074Z] 28726.50 IOPS, 112.21 MiB/s 00:29:57.350 Latency(us) 00:29:57.350 [2024-12-09T09:40:35.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.350 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.350 nvme0n1 : 2.01 28743.82 112.28 0.00 0.00 4448.89 1794.44 14105.84 00:29:57.350 [2024-12-09T09:40:35.074Z] =================================================================================================================== 00:29:57.350 [2024-12-09T09:40:35.074Z] Total : 28743.82 112.28 0.00 0.00 4448.89 1794.44 14105.84 00:29:57.350 { 00:29:57.350 "results": [ 00:29:57.350 { 00:29:57.350 "job": "nvme0n1", 00:29:57.350 "core_mask": "0x2", 00:29:57.350 "workload": "randwrite", 00:29:57.350 "status": "finished", 00:29:57.350 "queue_depth": 128, 00:29:57.350 "io_size": 4096, 00:29:57.350 "runtime": 2.007666, 00:29:57.350 "iops": 28743.824919085146, 00:29:57.350 "mibps": 112.28056609017635, 00:29:57.350 "io_failed": 0, 00:29:57.350 "io_timeout": 0, 00:29:57.350 "avg_latency_us": 4448.889096964355, 00:29:57.351 "min_latency_us": 1794.4380952380952, 00:29:57.351 "max_latency_us": 14105.843809523809 00:29:57.351 } 00:29:57.351 ], 00:29:57.351 "core_count": 1 00:29:57.351 } 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:57.351 | select(.opcode=="crc32c") 00:29:57.351 | "\(.module_name) \(.executed)"' 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2807558 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2807558 ']' 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2807558 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.351 10:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2807558 00:29:57.351 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:57.351 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:57.351 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2807558' 00:29:57.351 killing process with pid 2807558 00:29:57.351 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2807558 00:29:57.351 Received shutdown signal, test time was about 2.000000 seconds 
00:29:57.351 00:29:57.351 Latency(us) 00:29:57.351 [2024-12-09T09:40:35.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.351 [2024-12-09T09:40:35.075Z] =================================================================================================================== 00:29:57.351 [2024-12-09T09:40:35.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.351 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2807558 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2808032 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2808032 /var/tmp/bperf.sock 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2808032 ']' 00:29:57.609 10:40:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:57.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.609 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:57.609 [2024-12-09 10:40:35.250487] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:29:57.609 [2024-12-09 10:40:35.250532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808032 ] 00:29:57.609 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:57.609 Zero copy mechanism will not be used. 
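The "zero copy threshold" notice above is emitted because the configured IO size (131072, from `-o 131072`) exceeds the 65536-byte threshold, so the 128 KiB jobs always take the copy path. A sketch of that comparison (values taken from the log; the check itself is illustrative, not SPDK source):

```shell
# Illustrative only: mirrors the size comparison behind the log notice.
io_size=131072         # -o 131072 from the bdevperf invocation
zcopy_threshold=65536  # threshold reported in the notice
if [ "$io_size" -gt "$zcopy_threshold" ]; then
  echo "zero copy disabled for ${io_size}-byte I/O"
fi
```

The 4 KiB randwrite job earlier does not log this notice, consistent with 4096 being under the threshold.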
00:29:57.609 [2024-12-09 10:40:35.325340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.866 [2024-12-09 10:40:35.367402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.866 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.866 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:57.866 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:57.866 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:57.866 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:58.124 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:58.124 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:58.382 nvme0n1 00:29:58.382 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:58.382 10:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:58.382 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:58.382 Zero copy mechanism will not be used. 00:29:58.382 Running I/O for 2 seconds... 
00:30:00.688 6393.00 IOPS, 799.12 MiB/s [2024-12-09T09:40:38.412Z] 6990.00 IOPS, 873.75 MiB/s 00:30:00.688 Latency(us) 00:30:00.688 [2024-12-09T09:40:38.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.688 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:00.688 nvme0n1 : 2.00 6988.01 873.50 0.00 0.00 2285.66 1443.35 9861.61 00:30:00.688 [2024-12-09T09:40:38.412Z] =================================================================================================================== 00:30:00.688 [2024-12-09T09:40:38.412Z] Total : 6988.01 873.50 0.00 0.00 2285.66 1443.35 9861.61 00:30:00.688 { 00:30:00.688 "results": [ 00:30:00.688 { 00:30:00.688 "job": "nvme0n1", 00:30:00.688 "core_mask": "0x2", 00:30:00.688 "workload": "randwrite", 00:30:00.688 "status": "finished", 00:30:00.688 "queue_depth": 16, 00:30:00.688 "io_size": 131072, 00:30:00.688 "runtime": 2.003433, 00:30:00.688 "iops": 6988.005089264278, 00:30:00.688 "mibps": 873.5006361580347, 00:30:00.688 "io_failed": 0, 00:30:00.688 "io_timeout": 0, 00:30:00.688 "avg_latency_us": 2285.6585578231293, 00:30:00.688 "min_latency_us": 1443.352380952381, 00:30:00.688 "max_latency_us": 9861.60761904762 00:30:00.688 } 00:30:00.688 ], 00:30:00.688 "core_count": 1 00:30:00.688 } 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:00.688 | select(.opcode=="crc32c") 00:30:00.688 | "\(.module_name) \(.executed)"' 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2808032 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2808032 ']' 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2808032 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808032 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808032' 00:30:00.688 killing process with pid 2808032 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2808032 00:30:00.688 Received shutdown signal, test time was about 2.000000 seconds 
00:30:00.688 00:30:00.688 Latency(us) 00:30:00.688 [2024-12-09T09:40:38.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.688 [2024-12-09T09:40:38.412Z] =================================================================================================================== 00:30:00.688 [2024-12-09T09:40:38.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:00.688 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2808032 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2806359 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2806359 ']' 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2806359 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806359 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806359' 00:30:00.947 killing process with pid 2806359 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2806359 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2806359 00:30:00.947 00:30:00.947 
real 0m13.892s 00:30:00.947 user 0m26.460s 00:30:00.947 sys 0m4.648s 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:00.947 ************************************ 00:30:00.947 END TEST nvmf_digest_clean 00:30:00.947 ************************************ 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.947 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.205 ************************************ 00:30:01.205 START TEST nvmf_digest_error 00:30:01.205 ************************************ 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2808641 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2808641 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2808641 ']' 00:30:01.205 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.206 [2024-12-09 10:40:38.758918] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:30:01.206 [2024-12-09 10:40:38.758959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.206 [2024-12-09 10:40:38.836088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.206 [2024-12-09 10:40:38.876440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.206 [2024-12-09 10:40:38.876477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:01.206 [2024-12-09 10:40:38.876484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.206 [2024-12-09 10:40:38.876490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.206 [2024-12-09 10:40:38.876495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.206 [2024-12-09 10:40:38.877037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.206 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.464 [2024-12-09 10:40:38.945480] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.464 10:40:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.464 10:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.464 null0 00:30:01.464 [2024-12-09 10:40:39.041672] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.464 [2024-12-09 10:40:39.065863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2808769 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2808769 /var/tmp/bperf.sock 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2808769 ']' 
00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:01.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.464 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.464 [2024-12-09 10:40:39.121975] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:30:01.464 [2024-12-09 10:40:39.122015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808769 ] 00:30:01.723 [2024-12-09 10:40:39.197581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.723 [2024-12-09 10:40:39.239909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.723 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.723 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:01.723 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:01.723 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:01.982 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:01.982 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.982 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.982 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.982 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:01.982 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:02.240 nvme0n1 00:30:02.240 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:02.240 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.240 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:02.240 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.240 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:02.240 10:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:02.499 Running I/O for 2 seconds... 00:30:02.499 [2024-12-09 10:40:40.039241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.039279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.039292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.048289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.048314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.048324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.059901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.059923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.059932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.071437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.071459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2837 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.071467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.081850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.081873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.081884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.092483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.092511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.092520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.101993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.102014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.102023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.112139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.112160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.112169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.122250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.122270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.122279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.131876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.131898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.131906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.499 [2024-12-09 10:40:40.140048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.499 [2024-12-09 10:40:40.140068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.499 [2024-12-09 10:40:40.140076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.500 [2024-12-09 10:40:40.152981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 
00:30:02.500 [2024-12-09 10:40:40.153002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.500 [2024-12-09 10:40:40.153011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.500 [2024-12-09 10:40:40.164864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.500 [2024-12-09 10:40:40.164886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.500 [2024-12-09 10:40:40.164895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.500 [2024-12-09 10:40:40.175198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.500 [2024-12-09 10:40:40.175218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.500 [2024-12-09 10:40:40.175226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.500 [2024-12-09 10:40:40.186180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.500 [2024-12-09 10:40:40.186200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.500 [2024-12-09 10:40:40.186209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.500 [2024-12-09 10:40:40.194752] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.500 [2024-12-09 10:40:40.194773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.500 [2024-12-09 10:40:40.194781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.500 [2024-12-09 10:40:40.204244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.500 [2024-12-09 10:40:40.204264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.500 [2024-12-09 10:40:40.204272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.500 [2024-12-09 10:40:40.214203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.500 [2024-12-09 10:40:40.214224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.500 [2024-12-09 10:40:40.214232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.759 [2024-12-09 10:40:40.223067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.759 [2024-12-09 10:40:40.223088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.759 [2024-12-09 10:40:40.223097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:02.759 [2024-12-09 10:40:40.232432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.759 [2024-12-09 10:40:40.232454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.759 [2024-12-09 10:40:40.232462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.759 [2024-12-09 10:40:40.241864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.759 [2024-12-09 10:40:40.241884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.759 [2024-12-09 10:40:40.241892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.759 [2024-12-09 10:40:40.251911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.759 [2024-12-09 10:40:40.251932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.759 [2024-12-09 10:40:40.251940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.759 [2024-12-09 10:40:40.263183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.759 [2024-12-09 10:40:40.263203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.759 [2024-12-09 10:40:40.263215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.759 [2024-12-09 10:40:40.273367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.759 [2024-12-09 10:40:40.273387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.759 [2024-12-09 10:40:40.273395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.759 [2024-12-09 10:40:40.282734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.759 [2024-12-09 10:40:40.282755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.282763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.292090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.292111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.292119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.301509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.301530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.301538] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.311631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.311652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.311660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.320708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.320729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.320737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.329490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.329526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.329534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.340203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.340223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7026 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.340231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.350559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.350582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.350591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.359907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.359927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.359934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.369941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.369960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.369968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.382118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.382137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:60 nsid:1 lba:4258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.382145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.391426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.391446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.391454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.402927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.402948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.402956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.415487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.415508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.415517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.426415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.426435] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.426443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.439123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.439143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.439152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.448930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.448949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.448957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.458337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.458356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.458364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.467805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.467830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.467838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.760 [2024-12-09 10:40:40.478709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:02.760 [2024-12-09 10:40:40.478728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.760 [2024-12-09 10:40:40.478736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.486401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.486420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.486427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.497859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.497880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.497888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.510118] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.510142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.510149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.521305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.521325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.521333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.535383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.535409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.535421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.548040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.548060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.548067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:03.020 [2024-12-09 10:40:40.556244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.556263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.556271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.568473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.568491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.020 [2024-12-09 10:40:40.568499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.020 [2024-12-09 10:40:40.579717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.020 [2024-12-09 10:40:40.579737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.579745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.588830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.588851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.588859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.601738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.601759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.601768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.615096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.615117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.615125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.627724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.627744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.627752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.636290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.636313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.636321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.648520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.648539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.648547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.658073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.658092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.658100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.666583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.666602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.666610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.678641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.678661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:03.021 [2024-12-09 10:40:40.678668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.688906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.688925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.688933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.698577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.698597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.698621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.707329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.707348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.707356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.718011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.718030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.718038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.729725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.729744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.729753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.021 [2024-12-09 10:40:40.740989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.021 [2024-12-09 10:40:40.741008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.021 [2024-12-09 10:40:40.741016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.749652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.749671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.749679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.760686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.760706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.760713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.773223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.773245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.773253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.785465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.785485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.785493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.797614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.797634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.797642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.808742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 
00:30:03.281 [2024-12-09 10:40:40.808762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.808770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.817785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.817814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.817823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.829891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.829911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.829920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.838232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.838252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.838260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.850489] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.850509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.850517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.861070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.861090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.861098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.873397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.873417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.873425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.881313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.881333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.881341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.892970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.892990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.892999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.903480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.903515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.903524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.913837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.913857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.913865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.922519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.922539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.922547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.933333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.933354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.933362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.942229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.942250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.281 [2024-12-09 10:40:40.942258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.281 [2024-12-09 10:40:40.954080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.281 [2024-12-09 10:40:40.954102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.282 [2024-12-09 10:40:40.954110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.282 [2024-12-09 10:40:40.962051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.282 [2024-12-09 10:40:40.962071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.282 [2024-12-09 10:40:40.962079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.282 [2024-12-09 10:40:40.972630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.282 [2024-12-09 10:40:40.972650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.282 [2024-12-09 10:40:40.972658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.282 [2024-12-09 10:40:40.982011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.282 [2024-12-09 10:40:40.982032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.282 [2024-12-09 10:40:40.982040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.282 [2024-12-09 10:40:40.990410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.282 [2024-12-09 10:40:40.990430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.282 [2024-12-09 10:40:40.990441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.282 [2024-12-09 10:40:41.000206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.282 [2024-12-09 10:40:41.000228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17531 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:03.282 [2024-12-09 10:40:41.000236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.010636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.010658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.010665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.018884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.018905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.018912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 24247.00 IOPS, 94.71 MiB/s [2024-12-09T09:40:41.266Z] [2024-12-09 10:40:41.032542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.032562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.032570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.043839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.043860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.043868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.052796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.052821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.052830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.063122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.063142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.063151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.073897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.073918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.073926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.083628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.083653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.083662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.091922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.091943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.091952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.103067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.103088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.103096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.115926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.115946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.115954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.128256] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.128277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.128285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.140139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.140159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.140168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.152792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.152818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.152827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.163612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.163632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.163641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.172114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.172135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.172144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.184528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.184548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.184557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.195438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.195459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.195468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.203982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.204003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.204011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.214794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.214819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.214827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.227401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.227421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.227430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.238929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.238950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.238958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.249150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.249173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.249182] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.542 [2024-12-09 10:40:41.257525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.542 [2024-12-09 10:40:41.257546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.542 [2024-12-09 10:40:41.257554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.270372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.270393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.270405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.282606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.282626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.282634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.294924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.294944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:587 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.294952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.305635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.305656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.305665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.314456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.314477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.314485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.325948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.325971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.325979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.336518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.336540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:24170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.336548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.344402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.344423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.344431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.354576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.354597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.354605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.363894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.363916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.802 [2024-12-09 10:40:41.363924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:03.802 [2024-12-09 10:40:41.373423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0) 00:30:03.802 [2024-12-09 10:40:41.373443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:03.802 [2024-12-09 10:40:41.373451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:03.802 [2024-12-09 10:40:41.384136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0)
00:30:03.802 [2024-12-09 10:40:41.384158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:03.802 [2024-12-09 10:40:41.384166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1365 *ERROR*: data digest error on tqpair=(0xefd2e0), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens of further READ commands between 10:40:41.392802 and 10:40:42.012473, differing only in timestamp, cid, and lba ...]
00:30:04.325 [2024-12-09 10:40:42.020967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefd2e0)
00:30:04.325 [2024-12-09 10:40:42.020987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.325 [2024-12-09 10:40:42.020996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:04.325 24767.50 IOPS, 96.75 MiB/s
00:30:04.325 Latency(us)
00:30:04.325 [2024-12-09T09:40:42.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.325 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:04.325 nvme0n1 : 2.00 24790.65 96.84 0.00 0.00 5158.38 2512.21 17725.93
00:30:04.325 [2024-12-09T09:40:42.049Z] ===================================================================================================================
00:30:04.325 [2024-12-09T09:40:42.049Z] Total : 24790.65 96.84 0.00 0.00 5158.38 2512.21 17725.93
00:30:04.325 {
00:30:04.325   "results": [
00:30:04.325     {
00:30:04.325       "job": "nvme0n1",
00:30:04.325       "core_mask": "0x2",
00:30:04.325       "workload": "randread",
00:30:04.325       "status": "finished",
00:30:04.325       "queue_depth": 128,
00:30:04.325       "io_size": 4096,
00:30:04.325       "runtime": 2.003296,
00:30:04.325       "iops": 24790.645017011964,
00:30:04.325       "mibps": 96.83845709770299,
00:30:04.325       "io_failed": 0,
00:30:04.325       "io_timeout": 0,
00:30:04.325       "avg_latency_us": 5158.38117536961,
00:30:04.325       "min_latency_us": 2512.213333333333,
00:30:04.325       "max_latency_us": 17725.92761904762
00:30:04.325     }
00:30:04.325   ],
00:30:04.325   "core_count": 1
00:30:04.325 }
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:04.585 | .driver_specific
00:30:04.585 | .nvme_error
00:30:04.585 | .status_code
00:30:04.585 | .command_transient_transport_error'
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 ))
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2808769
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2808769 ']'
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2808769
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:04.585 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808769
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808769'
00:30:04.844 killing process with pid 2808769
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2808769
00:30:04.844 Received shutdown signal, test time was about 2.000000 seconds
00:30:04.844
00:30:04.844 Latency(us)
00:30:04.844 [2024-12-09T09:40:42.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.844 [2024-12-09T09:40:42.568Z] ===================================================================================================================
00:30:04.844 [2024-12-09T09:40:42.568Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2808769
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:04.844 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2809250
00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2809250 /var/tmp/bperf.sock
00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2809250 ']'
00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process
to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:04.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.845 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:04.845 [2024-12-09 10:40:42.512389] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:30:04.845 [2024-12-09 10:40:42.512436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809250 ] 00:30:04.845 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:04.845 Zero copy mechanism will not be used. 00:30:05.104 [2024-12-09 10:40:42.586495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.104 [2024-12-09 10:40:42.626276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.104 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.104 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:05.104 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:05.104 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:05.363 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:05.363 10:40:42 
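The `waitforlisten` step above blocks until the freshly launched bdevperf process accepts RPC connections on `/var/tmp/bperf.sock`. A minimal re-implementation sketch of that polling pattern (my own illustration, not SPDK's helper), retrying a UNIX-domain connect until it succeeds or a timeout expires:

```python
import socket
import time

def wait_for_listen(sock_path, timeout=5.0, interval=0.1):
    """Poll until something accepts connections on a UNIX domain socket.

    Returns True once a connect() succeeds, False if the timeout expires
    first. Hypothetical stand-in for the waitforlisten helper in the log.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True        # process is up and listening
        except OSError:
            time.sleep(interval)  # not ready yet; retry
        finally:
            s.close()
    return False
```

The retry loop matters because the RPC socket only appears after DPDK EAL initialization (the `Starting SPDK v25.01-pre ... initialization` lines above), which takes a variable amount of time.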
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.363 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:05.363 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.363 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:05.363 10:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:05.622 nvme0n1 00:30:05.622 10:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:05.622 10:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.622 10:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:05.622 10:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.622 10:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:05.622 10:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:05.622 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:05.622 Zero copy mechanism will not be used. 00:30:05.622 Running I/O for 2 seconds... 
00:30:05.622 [2024-12-09 10:40:43.338736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.622 [2024-12-09 10:40:43.338771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.622 [2024-12-09 10:40:43.338781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.622 [2024-12-09 10:40:43.344722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.622 [2024-12-09 10:40:43.344750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.622 [2024-12-09 10:40:43.344760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.350694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.350718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.881 [2024-12-09 10:40:43.350726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.356154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.356177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.881 [2024-12-09 10:40:43.356185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.361600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.361621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.881 [2024-12-09 10:40:43.361629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.366907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.366929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.881 [2024-12-09 10:40:43.366937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.372269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.372292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.881 [2024-12-09 10:40:43.372300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.377616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.377638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.881 [2024-12-09 10:40:43.377646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.382938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.382960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.881 [2024-12-09 10:40:43.382972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.881 [2024-12-09 10:40:43.388348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.881 [2024-12-09 10:40:43.388371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.388379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.393856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.393879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.393887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.399391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.399414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:05.882 [2024-12-09 10:40:43.399423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.404706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.404728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.404736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.410084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.410107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.410115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.415546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.415569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.415578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.420868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.420891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.420899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.426163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.426186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.426195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.431547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.431573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.431582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.437079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.437101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.437109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.442706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.442729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.442737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.448242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.448265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.448273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.453753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.453776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.453784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.458988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.459010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.459019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.464177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 
00:30:05.882 [2024-12-09 10:40:43.464209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.464218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.469419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.469441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.469450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.474800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.474829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.474838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.480020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.480043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.480051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.485225] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.485248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.485256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.490436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.490458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.490466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.495649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.495672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.495680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.500877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.500898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.500907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.506012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.506033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.506042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.511224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.511246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.511254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.516447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.516469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.516477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.521583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.521606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.521621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.526757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.526780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.882 [2024-12-09 10:40:43.526789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.882 [2024-12-09 10:40:43.532011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.882 [2024-12-09 10:40:43.532033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.532042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.537248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.537269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.537277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.542483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.542505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 
10:40:43.542512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.547674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.547696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.547704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.552867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.552889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.552900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.558049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.558071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.558079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.563262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.563284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.563292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.568490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.568512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.568520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.573693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.573714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.573722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.578868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.578889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.578897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.584083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.584105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.584113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.589268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.589289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.589297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.594542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.594564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.594572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.883 [2024-12-09 10:40:43.599826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:05.883 [2024-12-09 10:40:43.599848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.883 [2024-12-09 10:40:43.599857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.605006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 
00:30:06.144 [2024-12-09 10:40:43.605028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.605037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.610346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.610370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.610381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.615571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.615593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.615601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.620723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.620745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.620754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.625914] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.625937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.625945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.631130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.631151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.631159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.636315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.636337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.636345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.641459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.641481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.641489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.646672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.646694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.646702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.651817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.651839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.651847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.657015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.657040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.657049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.662156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.144 [2024-12-09 10:40:43.662178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.144 [2024-12-09 10:40:43.662186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.144 [2024-12-09 10:40:43.667282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.667304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.667312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.672483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.672505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.672513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.677698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.677719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.677727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.682912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.682934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 
10:40:43.682942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.688166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.688187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.688195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.693343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.693364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.693372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.698473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.698495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.698504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.703675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.703697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.703705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.708875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.708897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.708905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.714045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.714068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.714076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.719140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.719161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.719170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.721908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.721929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.721937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.727165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.727187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.727195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.732116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.732153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.732162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.737382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.737404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.737412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.742280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.742302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.742314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.747309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.747330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.747338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.752299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.752320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.752328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.757311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.757333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.757342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.762491] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.762513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.762521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.767673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.767694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.767703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.772852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.772873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.772881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.777996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.778018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.778026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.783220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.783241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.145 [2024-12-09 10:40:43.783249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.145 [2024-12-09 10:40:43.788330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.145 [2024-12-09 10:40:43.788354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.788362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.793471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.793493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.793501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.798650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.798671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.798679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.803777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.803798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.803806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.808942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.808964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.808972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.814077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.814098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.814107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.819217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.819239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 
10:40:43.819247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.824410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.824432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.824440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.829526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.829547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.829558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.834688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.834710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.834718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.839836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.839858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.839866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.845123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.845145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.845154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.850407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.850430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.850438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.855744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.855766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.855774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.146 [2024-12-09 10:40:43.860961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.146 [2024-12-09 10:40:43.860983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.146 [2024-12-09 10:40:43.860992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.866204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.866227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.866236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.871413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.871433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.871442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.876628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.876653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.876661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.881814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 
00:30:06.405 [2024-12-09 10:40:43.881836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.881844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.886956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.886978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.886986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.892048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.892069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.892077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.897206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.897228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.897236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.902391] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.902412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.902421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.907510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.907531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.907540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.912739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.912761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.912769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.917941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.917962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.917970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.923135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.923157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.923165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.928333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.928354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.928362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.933481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.933502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.933510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.938722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.938743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.938751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.943914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.943935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.943943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.949017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.949039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.949047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.954166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.954188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.954196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.959335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.959357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 
10:40:43.959365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.964418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.964439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.964451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.969582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.969603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.969611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.974703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.974724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.974732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.979882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.979903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.979911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.984955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.984976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.984985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.990034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.990055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.990064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:43.995133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:43.995155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:43.995164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:44.000385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:44.000406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:44.000414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:44.005578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:44.005600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:44.005608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.405 [2024-12-09 10:40:44.010785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.405 [2024-12-09 10:40:44.010816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.405 [2024-12-09 10:40:44.010826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.015944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.015965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.015973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.021079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.021100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.021108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.026190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.026211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.026219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.031349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.031371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.031379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.036495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.036518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.036526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.041665] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.041686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.041693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.046816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.046837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.046845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.051947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.051969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.051977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.057132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.057153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.057161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.062318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.062340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.062348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.067509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.067530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.067538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.072649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.072669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.072677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.077850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.077871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.077879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.083147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.083168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.083176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.088314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.088335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.088343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.093444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.093465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.093473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.098654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.098677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.098689] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.103907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.103930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.103938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.109112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.109133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.109142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.114270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.114291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.114299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.119508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.119529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.119537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.406 [2024-12-09 10:40:44.124687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.406 [2024-12-09 10:40:44.124709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.406 [2024-12-09 10:40:44.124717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.129940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.129962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.129970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.135131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.135153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.135161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.140308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.140330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.140338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.145563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.145584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.145592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.150757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.150778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.150786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.155919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.155940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.155949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.161053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.161074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.161082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.166209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.166231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.166238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.171305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.171334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.176430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.176452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.176460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.181571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.181593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.181601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.186762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.186783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.186796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.191924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.191946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.191954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.197073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.197095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.197103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.202207] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.202228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.202236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.207371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.207392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.207400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.212515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.212537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.212545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.217686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.217708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.217717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.222860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.222883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.222890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.228061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.228082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.228091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.664 [2024-12-09 10:40:44.233260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.664 [2024-12-09 10:40:44.233285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.664 [2024-12-09 10:40:44.233293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.238202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.238225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.238233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.243378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.243401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.243409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.248559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.248582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.248590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.254115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.254137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.254146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.259885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.259907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 
10:40:44.259915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.265974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.265998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.266007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.273394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.273418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.273426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.280563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.280587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.280595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.287899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.287923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.287932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.296075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.296099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.296107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.302364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.302387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.302395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.307526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.307549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.307557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.312994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.313015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.313023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.318618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.318641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.318650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.324074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.324096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.324104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.329892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.329914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.329923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.335352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.335374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.335387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.665 5873.00 IOPS, 734.12 MiB/s [2024-12-09T09:40:44.389Z] [2024-12-09 10:40:44.341902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.341925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.341934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.347271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.347293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.347301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.352583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.352605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.352614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.665 
[2024-12-09 10:40:44.357905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.357927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.357935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.363200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.363222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.363229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.368558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.368581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.368589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.373833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.373855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.373862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.379152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.379175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.379184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.665 [2024-12-09 10:40:44.384653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.665 [2024-12-09 10:40:44.384676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.665 [2024-12-09 10:40:44.384684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.940 [2024-12-09 10:40:44.390060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.940 [2024-12-09 10:40:44.390084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.940 [2024-12-09 10:40:44.390092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.940 [2024-12-09 10:40:44.395332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.940 [2024-12-09 10:40:44.395355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.940 [2024-12-09 10:40:44.395363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.940 [2024-12-09 10:40:44.400639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.940 [2024-12-09 10:40:44.400662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.940 [2024-12-09 10:40:44.400670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.940 [2024-12-09 10:40:44.405921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.940 [2024-12-09 10:40:44.405943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.940 [2024-12-09 10:40:44.405952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.940 [2024-12-09 10:40:44.411272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.940 [2024-12-09 10:40:44.411295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.940 [2024-12-09 10:40:44.411303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.940 [2024-12-09 10:40:44.416679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.940 [2024-12-09 10:40:44.416701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:06.940 [2024-12-09 10:40:44.416709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.940 [2024-12-09 10:40:44.420361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.940 [2024-12-09 10:40:44.420382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.940 [2024-12-09 10:40:44.420391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.424814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.424837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.424849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.429654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.429676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.429684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.435132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.435155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.435163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.440547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.440569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.440578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.445936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.445959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.445967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.451208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.451230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.451238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.457563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.457587] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.457598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.464258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.464280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.464289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.469851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.469873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.469881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.475278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.475305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.475313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.480908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.480931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.480939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.486528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.486550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.486559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.491399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.491422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.491431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.496726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.496748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.496757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.502232] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.502256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.502264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.507600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.507622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.507631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.512919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.512942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.512951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.518386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.518408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.518417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.523875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.523897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.523905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.529325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.529349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.529357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.534748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.534772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.534781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.540455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.540477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.540485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.545813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.545837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.545846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.551130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.551152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.551161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.556349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.556371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.556379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.561584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.561605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 
10:40:44.561614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.567117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.567140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.567151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.572456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.572478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.572486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.577528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.577550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.577558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.580298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.941 [2024-12-09 10:40:44.580320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.941 [2024-12-09 10:40:44.580329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.941 [2024-12-09 10:40:44.585553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.585575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.585584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.590753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.590774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.590782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.595998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.596020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.596028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.601286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.601307] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.601315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.606482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.606504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.606512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.611826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.611852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.611861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.617155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.617176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.617183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.622389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 
10:40:44.622410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.622418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.627757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.627779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.627787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.633009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.633031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.633039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.638481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.638502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.638510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.643775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.643797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.643805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.649152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.649174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.649182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.654716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.654737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.654745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.942 [2024-12-09 10:40:44.660107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:06.942 [2024-12-09 10:40:44.660129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.942 [2024-12-09 10:40:44.660137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.665146] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.665168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.665176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.670651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.670673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.670681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.675702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.675723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.675731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.680903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.680924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.680932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.687052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.687075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.687083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.693856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.693879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.693887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.700635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.700657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.700666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.708158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.708184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.708193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.716057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.716079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.716088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.723992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.724015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.724023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.731799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.731830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.731840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.739562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.739584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.739593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.747217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.747239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.747248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.755119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.755141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.755150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.762870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.762893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.762902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.771167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.771189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.771198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.779608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.779631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.779639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.788002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.788025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.788033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.795786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.795814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.795823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.803531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.803554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.803563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.811877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.811900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.811909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.818496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.818517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.818526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.823952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.823974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.203 [2024-12-09 10:40:44.823982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.203 [2024-12-09 10:40:44.829357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.203 [2024-12-09 10:40:44.829378] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.829386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.834776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.834797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.834815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.840790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.840818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.840827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.846194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.846216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.846224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.851720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.851742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.851750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.856731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.856753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.856762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.862040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.862061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.862070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.867601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.867623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.867631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.873132] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.873155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.873164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.878346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.878368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.878376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.883540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.883567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.883576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.888904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.888926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.888934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.892409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.892431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.892440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.896851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.896872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.896881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.902035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.902057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.902065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.907333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.907355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.907363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.912659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.912680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.912688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.917982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.918003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.918011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.204 [2024-12-09 10:40:44.923462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.204 [2024-12-09 10:40:44.923484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.204 [2024-12-09 10:40:44.923493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.929026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.929049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.929057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.934876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.934898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.934906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.940360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.940382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.940390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.945794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.945822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.945830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.951234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.951256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.951264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.956631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.956652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.956660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.961960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.961981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.961989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.967294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.967315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.967323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.972451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.972473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.972484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.977848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.977869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.977877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.983287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.983309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.983317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.989132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.989153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.989161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.994683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.994705] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.994713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:44.999889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:44.999910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:44.999918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:45.005761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:45.005782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:45.005790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:45.011007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:45.011029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:45.011038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:07.464 [2024-12-09 10:40:45.016367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x228bdd0) 00:30:07.464 [2024-12-09 10:40:45.016389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.464 [2024-12-09 10:40:45.016397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[repeated nvme_tcp.c:1365 "data digest error on tqpair=(0x228bdd0)" / nvme_qpair.c READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 triples elided, timestamps 10:40:45.021 through 10:40:45.337]
00:30:07.725 5724.50 IOPS, 715.56 MiB/s 00:30:07.725 Latency(us) 00:30:07.725 [2024-12-09T09:40:45.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.725 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:07.725 nvme0n1 : 2.00 5727.31 715.91 0.00 0.00 2791.06 639.76 8550.89 00:30:07.725 [2024-12-09T09:40:45.449Z] =================================================================================================================== 00:30:07.725 [2024-12-09T09:40:45.449Z] Total : 5727.31 715.91 0.00 0.00 2791.06 639.76 8550.89 00:30:07.725 { 00:30:07.725 "results": [ 00:30:07.725 {
"job": "nvme0n1", 00:30:07.725 "core_mask": "0x2", 00:30:07.725 "workload": "randread", 00:30:07.725 "status": "finished", 00:30:07.725 "queue_depth": 16, 00:30:07.725 "io_size": 131072, 00:30:07.725 "runtime": 2.001813, 00:30:07.725 "iops": 5727.3081951211225, 00:30:07.725 "mibps": 715.9135243901403, 00:30:07.725 "io_failed": 0, 00:30:07.725 "io_timeout": 0, 00:30:07.725 "avg_latency_us": 2791.06252819139, 00:30:07.725 "min_latency_us": 639.7561904761905, 00:30:07.725 "max_latency_us": 8550.887619047619 00:30:07.725 } 00:30:07.725 ], 00:30:07.725 "core_count": 1 00:30:07.725 } 00:30:07.725 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:07.725 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:07.725 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:07.725 | .driver_specific 00:30:07.725 | .nvme_error 00:30:07.725 | .status_code 00:30:07.725 | .command_transient_transport_error' 00:30:07.725 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 )) 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2809250 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2809250 ']' 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2809250 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809250 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809250' 00:30:07.983 killing process with pid 2809250 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2809250 00:30:07.983 Received shutdown signal, test time was about 2.000000 seconds 00:30:07.983 00:30:07.983 Latency(us) 00:30:07.983 [2024-12-09T09:40:45.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.983 [2024-12-09T09:40:45.707Z] =================================================================================================================== 00:30:07.983 [2024-12-09T09:40:45.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:07.983 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2809250 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:08.242 10:40:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2809774 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2809774 /var/tmp/bperf.sock 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2809774 ']' 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:08.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.242 10:40:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.242 [2024-12-09 10:40:45.826619] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:30:08.242 [2024-12-09 10:40:45.826672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809774 ] 00:30:08.242 [2024-12-09 10:40:45.905154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.242 [2024-12-09 10:40:45.945898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.500 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.500 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:08.500 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:08.500 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:08.758 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:08.758 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.758 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.758 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.758 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:08.758 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.017 nvme0n1 00:30:09.017 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:09.017 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.017 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:09.017 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.017 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:09.017 10:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:09.277 Running I/O for 2 seconds... 
00:30:09.277 [2024-12-09 10:40:46.785974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeea00 00:30:09.277 [2024-12-09 10:40:46.786868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.786900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.794593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee7c50 00:30:09.277 [2024-12-09 10:40:46.795452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.795475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.803963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5658 00:30:09.277 [2024-12-09 10:40:46.804967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.804990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.812259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eea680 00:30:09.277 [2024-12-09 10:40:46.812899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.812920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.821306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efdeb0 00:30:09.277 [2024-12-09 10:40:46.821737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.821757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.831517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0ff8 00:30:09.277 [2024-12-09 10:40:46.832720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.832741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.839795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eff3c8 00:30:09.277 [2024-12-09 10:40:46.840687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.840707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.848816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee27f0 00:30:09.277 [2024-12-09 10:40:46.849485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.849505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.857237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eed4e8 00:30:09.277 [2024-12-09 10:40:46.858439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.858461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.865507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee27f0 00:30:09.277 [2024-12-09 10:40:46.866169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.866188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.874384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efd208 00:30:09.277 [2024-12-09 10:40:46.875050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.875069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.883283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efc128 00:30:09.277 [2024-12-09 10:40:46.883940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.883959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.892154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eebb98 00:30:09.277 [2024-12-09 10:40:46.892821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.892839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.901044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee6300 00:30:09.277 [2024-12-09 10:40:46.901678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.901699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.909914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef20d8 00:30:09.277 [2024-12-09 10:40:46.910572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.910591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.918786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef31b8 00:30:09.277 [2024-12-09 10:40:46.919444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 
[2024-12-09 10:40:46.919463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.927632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eed920 00:30:09.277 [2024-12-09 10:40:46.928293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.928313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.936498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eec840 00:30:09.277 [2024-12-09 10:40:46.937161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.937181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.945383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0350 00:30:09.277 [2024-12-09 10:40:46.946045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.946064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.955389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eef270 00:30:09.277 [2024-12-09 10:40:46.956491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13092 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.956509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:09.277 [2024-12-09 10:40:46.963647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efda78 00:30:09.277 [2024-12-09 10:40:46.964406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.277 [2024-12-09 10:40:46.964424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.278 [2024-12-09 10:40:46.972387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef8e88 00:30:09.278 [2024-12-09 10:40:46.973146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.278 [2024-12-09 10:40:46.973165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.278 [2024-12-09 10:40:46.981297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee8088 00:30:09.278 [2024-12-09 10:40:46.982060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.278 [2024-12-09 10:40:46.982078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.278 [2024-12-09 10:40:46.990223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee6fa8 00:30:09.278 [2024-12-09 10:40:46.990986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:76 nsid:1 lba:4870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.278 [2024-12-09 10:40:46.991004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.278 [2024-12-09 10:40:46.999309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efb048 00:30:09.538 [2024-12-09 10:40:47.000091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.000110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.008342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef9f68 00:30:09.538 [2024-12-09 10:40:47.009112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.009132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.017232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efe2e8 00:30:09.538 [2024-12-09 10:40:47.017997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.018016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.026106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeea00 00:30:09.538 [2024-12-09 10:40:47.026864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.026882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.034990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eefae0 00:30:09.538 [2024-12-09 10:40:47.035777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.035795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.043264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eea680 00:30:09.538 [2024-12-09 10:40:47.044032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.044051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.053424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee38d0 00:30:09.538 [2024-12-09 10:40:47.054305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.054325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.062592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efa3a0 00:30:09.538 
[2024-12-09 10:40:47.063548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.063567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.071030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eddc00 00:30:09.538 [2024-12-09 10:40:47.072016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.072035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.080954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee88f8 00:30:09.538 [2024-12-09 10:40:47.082051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.082071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.090109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee4578 00:30:09.538 [2024-12-09 10:40:47.091244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.091268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.097474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf44d90) with pdu=0x200016efe720 00:30:09.538 [2024-12-09 10:40:47.098130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.098150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.105622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee99d8 00:30:09.538 [2024-12-09 10:40:47.106261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.106279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.114449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeee38 00:30:09.538 [2024-12-09 10:40:47.115093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.115112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.123806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef1ca0 00:30:09.538 [2024-12-09 10:40:47.124441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.124461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.132544] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee12d8 00:30:09.538 [2024-12-09 10:40:47.133179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.133198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.141572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef2d80 00:30:09.538 [2024-12-09 10:40:47.142207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.142227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.150225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef5378 00:30:09.538 [2024-12-09 10:40:47.150931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.150950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:09.538 [2024-12-09 10:40:47.161140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee4140 00:30:09.538 [2024-12-09 10:40:47.162295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.162315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:30:09.538 [2024-12-09 10:40:47.169002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeee38 00:30:09.538 [2024-12-09 10:40:47.169497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.538 [2024-12-09 10:40:47.169516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.179761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efb048 00:30:09.539 [2024-12-09 10:40:47.181179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.181198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.189071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eff3c8 00:30:09.539 [2024-12-09 10:40:47.190588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.190607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.195477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee1f80 00:30:09.539 [2024-12-09 10:40:47.196321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.196340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.206236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef2948 00:30:09.539 [2024-12-09 10:40:47.207204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.207224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.214948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eed920 00:30:09.539 [2024-12-09 10:40:47.216012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.216032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.224030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef8e88 00:30:09.539 [2024-12-09 10:40:47.225077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.225098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.234066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee8088 00:30:09.539 [2024-12-09 10:40:47.235586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.235606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.240439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee0ea0 00:30:09.539 [2024-12-09 10:40:47.241158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.241177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.249478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efc998 00:30:09.539 [2024-12-09 10:40:47.250209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.250228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:09.539 [2024-12-09 10:40:47.258537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016edfdc0 00:30:09.539 [2024-12-09 10:40:47.259297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.539 [2024-12-09 10:40:47.259317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.267663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef31b8 00:30:09.798 [2024-12-09 10:40:47.268386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.268406] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.275972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee6fa8 00:30:09.798 [2024-12-09 10:40:47.276670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.276689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.285861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee49b0 00:30:09.798 [2024-12-09 10:40:47.286693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.286713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.294143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef5378 00:30:09.798 [2024-12-09 10:40:47.294987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.295007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.305376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeb760 00:30:09.798 [2024-12-09 10:40:47.306730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:09.798 [2024-12-09 10:40:47.306750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.314429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeaab8 00:30:09.798 [2024-12-09 10:40:47.315791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.315815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.322140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee1710 00:30:09.798 [2024-12-09 10:40:47.323050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.323075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.330958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef6cc8 00:30:09.798 [2024-12-09 10:40:47.331857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.331877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.339867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef1430 00:30:09.798 [2024-12-09 10:40:47.340749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19869 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.340768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.350914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeaab8 00:30:09.798 [2024-12-09 10:40:47.352404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.352424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.357194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef4298 00:30:09.798 [2024-12-09 10:40:47.357940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.798 [2024-12-09 10:40:47.357959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:09.798 [2024-12-09 10:40:47.366049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0bc0 00:30:09.799 [2024-12-09 10:40:47.366806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.366830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.376550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eefae0 00:30:09.799 [2024-12-09 10:40:47.377558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.377578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.385706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efcdd0 00:30:09.799 [2024-12-09 10:40:47.386935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.386955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.393979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016edf550 00:30:09.799 [2024-12-09 10:40:47.395048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.395068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.402362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee27f0 00:30:09.799 [2024-12-09 10:40:47.403319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.403342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.410784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee73e0 00:30:09.799 [2024-12-09 10:40:47.411683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.411704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.420519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee7818 00:30:09.799 [2024-12-09 10:40:47.421537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.421558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.429401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeee38 00:30:09.799 [2024-12-09 10:40:47.430336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.430355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.437713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee95a0 00:30:09.799 [2024-12-09 10:40:47.438910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.438929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.447049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efa7d8 
00:30:09.799 [2024-12-09 10:40:47.448357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.448377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.454654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef6cc8 00:30:09.799 [2024-12-09 10:40:47.455313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.455333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.465313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efd640 00:30:09.799 [2024-12-09 10:40:47.466330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.466349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.472677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef4b08 00:30:09.799 [2024-12-09 10:40:47.473221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.473240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.481410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf44d90) with pdu=0x200016ef4b08 00:30:09.799 [2024-12-09 10:40:47.481957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.481977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.492128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee12d8 00:30:09.799 [2024-12-09 10:40:47.493154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.493173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.500330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee12d8 00:30:09.799 [2024-12-09 10:40:47.501327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.501346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.509758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee12d8 00:30:09.799 [2024-12-09 10:40:47.510770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.510789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:09.799 [2024-12-09 10:40:47.518716] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee12d8 00:30:09.799 [2024-12-09 10:40:47.519867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:09.799 [2024-12-09 10:40:47.519887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.527777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee12d8 00:30:10.057 [2024-12-09 10:40:47.528788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.528812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.536956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef5378 00:30:10.057 [2024-12-09 10:40:47.538193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.538213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.546245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef6458 00:30:10.057 [2024-12-09 10:40:47.547642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.547661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:30:10.057 [2024-12-09 10:40:47.555739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efda78 00:30:10.057 [2024-12-09 10:40:47.557280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.557299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.562268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0350 00:30:10.057 [2024-12-09 10:40:47.562992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.563011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.571653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef7100 00:30:10.057 [2024-12-09 10:40:47.572545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.572564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.580640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eecc78 00:30:10.057 [2024-12-09 10:40:47.581506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.581525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.589502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef1430 00:30:10.057 [2024-12-09 10:40:47.590362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.590382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.598394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeb760 00:30:10.057 [2024-12-09 10:40:47.599286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.599305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.606674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee49b0 00:30:10.057 [2024-12-09 10:40:47.607502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.607521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.616577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efa7d8 00:30:10.057 [2024-12-09 10:40:47.617546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.617566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.625457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef96f8 00:30:10.057 [2024-12-09 10:40:47.626433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.626452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.634603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef5be8 00:30:10.057 [2024-12-09 10:40:47.635607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.635630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.643648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee38d0 00:30:10.057 [2024-12-09 10:40:47.644736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.057 [2024-12-09 10:40:47.644755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:10.057 [2024-12-09 10:40:47.652800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef6cc8 00:30:10.057 [2024-12-09 10:40:47.653990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.654008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.660057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efe720 00:30:10.058 [2024-12-09 10:40:47.660793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.660816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.668931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee9168 00:30:10.058 [2024-12-09 10:40:47.669667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.669686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.677159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efeb58 00:30:10.058 [2024-12-09 10:40:47.677887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.677905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.687052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0350 00:30:10.058 [2024-12-09 10:40:47.687946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 
[2024-12-09 10:40:47.687965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.695943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efbcf0 00:30:10.058 [2024-12-09 10:40:47.696810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.696829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.704793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee1b48 00:30:10.058 [2024-12-09 10:40:47.705637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.705656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.713643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efda78 00:30:10.058 [2024-12-09 10:40:47.714512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.714531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.722513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5658 00:30:10.058 [2024-12-09 10:40:47.723370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14842 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.723389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.730788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5220 00:30:10.058 [2024-12-09 10:40:47.731650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.731669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.740729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efef90 00:30:10.058 [2024-12-09 10:40:47.741726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.741746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.749022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eebfd0 00:30:10.058 [2024-12-09 10:40:47.749988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.750007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.758972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eff3c8 00:30:10.058 [2024-12-09 10:40:47.760103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:7937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.760122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:10.058 [2024-12-09 10:40:47.767972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee6fa8 00:30:10.058 [2024-12-09 10:40:47.769083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.769103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:10.058 28404.00 IOPS, 110.95 MiB/s [2024-12-09T09:40:47.782Z] [2024-12-09 10:40:47.776876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef2d80 00:30:10.058 [2024-12-09 10:40:47.778014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.058 [2024-12-09 10:40:47.778034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.785374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee4578 00:30:10.317 [2024-12-09 10:40:47.786440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.786460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.795148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016edf118 00:30:10.317 
[2024-12-09 10:40:47.796345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.796369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.802401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee0ea0 00:30:10.317 [2024-12-09 10:40:47.803060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.803080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.810876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efcdd0 00:30:10.317 [2024-12-09 10:40:47.811594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.811613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.819883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee4578 00:30:10.317 [2024-12-09 10:40:47.820615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.820634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.828583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf44d90) with pdu=0x200016ef92c0 00:30:10.317 [2024-12-09 10:40:47.829292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.829311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.837593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee0ea0 00:30:10.317 [2024-12-09 10:40:47.838338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.838358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.846796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef6cc8 00:30:10.317 [2024-12-09 10:40:47.847523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.847543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.856063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef96f8 00:30:10.317 [2024-12-09 10:40:47.857026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.857045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.865090] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee49b0 00:30:10.317 [2024-12-09 10:40:47.865593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.865615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.876249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eee190 00:30:10.317 [2024-12-09 10:40:47.877819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.877837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.882682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef7da8 00:30:10.317 [2024-12-09 10:40:47.883533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.883551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.891957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eea680 00:30:10.317 [2024-12-09 10:40:47.892933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.892951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:30:10.317 [2024-12-09 10:40:47.902885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef9f68 00:30:10.317 [2024-12-09 10:40:47.904342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.904361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.909152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0ff8 00:30:10.317 [2024-12-09 10:40:47.909840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.909859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.920548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef81e0 00:30:10.317 [2024-12-09 10:40:47.922018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.922037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:10.317 [2024-12-09 10:40:47.926823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef8618 00:30:10.317 [2024-12-09 10:40:47.927456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.317 [2024-12-09 10:40:47.927475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.935653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016edf118 00:30:10.318 [2024-12-09 10:40:47.936388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.936407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.944746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeb328 00:30:10.318 [2024-12-09 10:40:47.945502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.945520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.953422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5658 00:30:10.318 [2024-12-09 10:40:47.954150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.954168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.962701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeea00 00:30:10.318 [2024-12-09 10:40:47.963475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.963494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.971993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee4de8 00:30:10.318 [2024-12-09 10:40:47.972879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.972897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.981264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efd208 00:30:10.318 [2024-12-09 10:40:47.982372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.982391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.989769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeb328 00:30:10.318 [2024-12-09 10:40:47.990615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.990635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:47.998823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef8a50 00:30:10.318 [2024-12-09 10:40:47.999686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:47.999704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:48.008182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef7da8 00:30:10.318 [2024-12-09 10:40:48.009175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:48.009195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:48.017460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efa7d8 00:30:10.318 [2024-12-09 10:40:48.018580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:48.018600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:48.026592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef2d80 00:30:10.318 [2024-12-09 10:40:48.027273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 [2024-12-09 10:40:48.027292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:10.318 [2024-12-09 10:40:48.035435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee88f8 00:30:10.318 [2024-12-09 10:40:48.036452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.318 
[2024-12-09 10:40:48.036473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.044609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef9b30 00:30:10.577 [2024-12-09 10:40:48.045532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.045552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.053049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efac10 00:30:10.577 [2024-12-09 10:40:48.053743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.053763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.062796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee23b8 00:30:10.577 [2024-12-09 10:40:48.063797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.063819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.071789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee1b48 00:30:10.577 [2024-12-09 10:40:48.072812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6646 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.072831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.080830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0bc0 00:30:10.577 [2024-12-09 10:40:48.081812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.081831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.089829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ede038 00:30:10.577 [2024-12-09 10:40:48.090388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.090407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.099798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef81e0 00:30:10.577 [2024-12-09 10:40:48.101037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.101060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.108116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee88f8 00:30:10.577 [2024-12-09 10:40:48.109240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:68 nsid:1 lba:490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.109258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.117242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee0a68 00:30:10.577 [2024-12-09 10:40:48.118370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.118389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.125638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eec408 00:30:10.577 [2024-12-09 10:40:48.126762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.126782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.134643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016edf988 00:30:10.577 [2024-12-09 10:40:48.135784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.135803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.143300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef6cc8 00:30:10.577 [2024-12-09 10:40:48.144420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.144439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.152313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeff18 00:30:10.577 [2024-12-09 10:40:48.153424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.153443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.160963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee01f8 00:30:10.577 [2024-12-09 10:40:48.161630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.161650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.171168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eed4e8 00:30:10.577 [2024-12-09 10:40:48.172636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.172655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.177440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef7da8 00:30:10.577 
[2024-12-09 10:40:48.178092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.577 [2024-12-09 10:40:48.178114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:10.577 [2024-12-09 10:40:48.188891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef3a28 00:30:10.578 [2024-12-09 10:40:48.190372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.190390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.195160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee7c50 00:30:10.578 [2024-12-09 10:40:48.195845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.195864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.203596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eed0b0 00:30:10.578 [2024-12-09 10:40:48.204231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.204250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.212901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf44d90) with pdu=0x200016ee88f8 00:30:10.578 [2024-12-09 10:40:48.213660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.213680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.222190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee8088 00:30:10.578 [2024-12-09 10:40:48.223226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.223245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.231631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef57b0 00:30:10.578 [2024-12-09 10:40:48.232665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.232684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.242284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef2510 00:30:10.578 [2024-12-09 10:40:48.243773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.243792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.248565] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eed0b0 00:30:10.578 [2024-12-09 10:40:48.249204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.249224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.256987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef8618 00:30:10.578 [2024-12-09 10:40:48.257621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.257639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.268023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee4140 00:30:10.578 [2024-12-09 10:40:48.269152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.269171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.277119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeb760 00:30:10.578 [2024-12-09 10:40:48.277791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.277815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:30:10.578 [2024-12-09 10:40:48.285493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee3060 00:30:10.578 [2024-12-09 10:40:48.286105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.286124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:10.578 [2024-12-09 10:40:48.294851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efda78 00:30:10.578 [2024-12-09 10:40:48.295543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.578 [2024-12-09 10:40:48.295562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.303455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee6300 00:30:10.837 [2024-12-09 10:40:48.304718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.304737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.311256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef2948 00:30:10.837 [2024-12-09 10:40:48.311891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.311910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.322297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efb8b8 00:30:10.837 [2024-12-09 10:40:48.323524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.323551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.331584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee0ea0 00:30:10.837 [2024-12-09 10:40:48.332930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.332948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.340868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee4140 00:30:10.837 [2024-12-09 10:40:48.342356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.342374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.347152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5ec8 00:30:10.837 [2024-12-09 10:40:48.347821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.347840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.355564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef92c0 00:30:10.837 [2024-12-09 10:40:48.356221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.356239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.366498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee7818 00:30:10.837 [2024-12-09 10:40:48.367620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.367639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.374345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee6300 00:30:10.837 [2024-12-09 10:40:48.374783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.837 [2024-12-09 10:40:48.374802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:10.837 [2024-12-09 10:40:48.383574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efa7d8 00:30:10.838 [2024-12-09 10:40:48.384471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.384489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.392561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5a90 00:30:10.838 [2024-12-09 10:40:48.393027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.393046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.403874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eedd58 00:30:10.838 [2024-12-09 10:40:48.405377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.405396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.410306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee1f80 00:30:10.838 [2024-12-09 10:40:48.411109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.411131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.421329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eec408 00:30:10.838 [2024-12-09 10:40:48.422621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 
[2024-12-09 10:40:48.422641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.429714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eee190 00:30:10.838 [2024-12-09 10:40:48.430724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.430744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.438663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eecc78 00:30:10.838 [2024-12-09 10:40:48.439745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.447979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efdeb0 00:30:10.838 [2024-12-09 10:40:48.449159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.449178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.457252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef7da8 00:30:10.838 [2024-12-09 10:40:48.458525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11433 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.458544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.465481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeb760 00:30:10.838 [2024-12-09 10:40:48.466338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.466357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.473644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef46d0 00:30:10.838 [2024-12-09 10:40:48.474601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.474619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.482633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efef90 00:30:10.838 [2024-12-09 10:40:48.483130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.483150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.493915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee12d8 00:30:10.838 [2024-12-09 10:40:48.495486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.495505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.500452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efb480 00:30:10.838 [2024-12-09 10:40:48.501314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.501334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.509459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ede8a8 00:30:10.838 [2024-12-09 10:40:48.510317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.510337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.518117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef3a28 00:30:10.838 [2024-12-09 10:40:48.518945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.518964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.527402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5ec8 00:30:10.838 [2024-12-09 10:40:48.528408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.528427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.536298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eedd58 00:30:10.838 [2024-12-09 10:40:48.536939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.536959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.544433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee3d08 00:30:10.838 [2024-12-09 10:40:48.545059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.545078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:10.838 [2024-12-09 10:40:48.554984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee3d08 00:30:10.838 [2024-12-09 10:40:48.556123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.838 [2024-12-09 10:40:48.556141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.562901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ede8a8 00:30:11.097 
[2024-12-09 10:40:48.563539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.563560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.571910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ede038 00:30:11.097 [2024-12-09 10:40:48.572525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.572544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.580915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efd208 00:30:11.097 [2024-12-09 10:40:48.581524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.581543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.589310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0350 00:30:11.097 [2024-12-09 10:40:48.589999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.590019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.598843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf44d90) with pdu=0x200016eefae0 00:30:11.097 [2024-12-09 10:40:48.599554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.599573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.608124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeea00 00:30:11.097 [2024-12-09 10:40:48.609050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.609071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.617275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eedd58 00:30:11.097 [2024-12-09 10:40:48.617760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.617780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.626568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef0788 00:30:11.097 [2024-12-09 10:40:48.627176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.627196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.635055] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeff18 00:30:11.097 [2024-12-09 10:40:48.635573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.635604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.646196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef8618 00:30:11.097 [2024-12-09 10:40:48.647672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.647694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.652588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef7da8 00:30:11.097 [2024-12-09 10:40:48.653400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.653419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.663464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efac10 00:30:11.097 [2024-12-09 10:40:48.664675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.664695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:30:11.097 [2024-12-09 10:40:48.670123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016efb048 00:30:11.097 [2024-12-09 10:40:48.670841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.670860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.680896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee5a90 00:30:11.097 [2024-12-09 10:40:48.681883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.681902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.690337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee9e10 00:30:11.097 [2024-12-09 10:40:48.691634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.691654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.699621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef6020 00:30:11.097 [2024-12-09 10:40:48.701035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.701055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.707485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef35f0 00:30:11.097 [2024-12-09 10:40:48.708243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.708263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.716275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee88f8 00:30:11.097 [2024-12-09 10:40:48.717343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.717362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.725101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef1868 00:30:11.097 [2024-12-09 10:40:48.726047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.726067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.733509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ee9168 00:30:11.097 [2024-12-09 10:40:48.734471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.734490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:11.097 [2024-12-09 10:40:48.742740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eee5c8 00:30:11.097 [2024-12-09 10:40:48.743276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-12-09 10:40:48.743296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:11.098 [2024-12-09 10:40:48.751720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016eeff18 00:30:11.098 [2024-12-09 10:40:48.752493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.098 [2024-12-09 10:40:48.752512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:11.098 [2024-12-09 10:40:48.761475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016edf988 00:30:11.098 [2024-12-09 10:40:48.762599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.098 [2024-12-09 10:40:48.762618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:11.098 [2024-12-09 10:40:48.770175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef4f40 00:30:11.098 [2024-12-09 10:40:48.771288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.098 [2024-12-09 10:40:48.771307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:11.098 28490.00 IOPS, 111.29 MiB/s [2024-12-09T09:40:48.822Z] [2024-12-09 10:40:48.777941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf44d90) with pdu=0x200016ef20d8 00:30:11.098 [2024-12-09 10:40:48.778559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.098 [2024-12-09 10:40:48.778577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:11.098 00:30:11.098 Latency(us) 00:30:11.098 [2024-12-09T09:40:48.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.098 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.098 nvme0n1 : 2.01 28488.24 111.28 0.00 0.00 4487.15 1771.03 13668.94 00:30:11.098 [2024-12-09T09:40:48.822Z] =================================================================================================================== 00:30:11.098 [2024-12-09T09:40:48.822Z] Total : 28488.24 111.28 0.00 0.00 4487.15 1771.03 13668.94 00:30:11.098 { 00:30:11.098 "results": [ 00:30:11.098 { 00:30:11.098 "job": "nvme0n1", 00:30:11.098 "core_mask": "0x2", 00:30:11.098 "workload": "randwrite", 00:30:11.098 "status": "finished", 00:30:11.098 "queue_depth": 128, 00:30:11.098 "io_size": 4096, 00:30:11.098 "runtime": 2.006863, 00:30:11.098 "iops": 28488.242595533426, 00:30:11.098 "mibps": 111.28219763880244, 00:30:11.098 "io_failed": 0, 00:30:11.098 "io_timeout": 0, 00:30:11.098 "avg_latency_us": 4487.152559128178, 00:30:11.098 "min_latency_us": 1771.032380952381, 00:30:11.098 "max_latency_us": 13668.937142857143 00:30:11.098 } 00:30:11.098 ], 00:30:11.098 "core_count": 1 00:30:11.098 } 00:30:11.098 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:11.098 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:11.098 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:11.098 | .driver_specific 00:30:11.098 | .nvme_error 00:30:11.098 | .status_code 00:30:11.098 | .command_transient_transport_error' 00:30:11.098 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:11.356 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 )) 00:30:11.356 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2809774 00:30:11.356 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2809774 ']' 00:30:11.356 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2809774 00:30:11.356 10:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:11.356 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.356 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809774 00:30:11.356 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:11.356 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:11.356 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809774' 00:30:11.356 killing process with pid 2809774 00:30:11.356 
10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2809774 00:30:11.356 Received shutdown signal, test time was about 2.000000 seconds 00:30:11.356 00:30:11.356 Latency(us) 00:30:11.356 [2024-12-09T09:40:49.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.356 [2024-12-09T09:40:49.080Z] =================================================================================================================== 00:30:11.356 [2024-12-09T09:40:49.080Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:11.356 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2809774 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2810412 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2810412 /var/tmp/bperf.sock 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2810412 ']' 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:11.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.614 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:11.614 [2024-12-09 10:40:49.256314] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:30:11.614 [2024-12-09 10:40:49.256371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810412 ] 00:30:11.614 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:11.614 Zero copy mechanism will not be used. 
00:30:11.614 [2024-12-09 10:40:49.332479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.873 [2024-12-09 10:40:49.369700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.873 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.873 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:11.873 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:11.873 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:12.131 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:12.131 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.131 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:12.131 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.131 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:12.131 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:12.389 nvme0n1 00:30:12.389 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:12.389 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.389 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:12.389 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.389 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:12.389 10:40:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:12.389 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:12.389 Zero copy mechanism will not be used. 00:30:12.389 Running I/O for 2 seconds... 00:30:12.389 [2024-12-09 10:40:50.084924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.389 [2024-12-09 10:40:50.085008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.389 [2024-12-09 10:40:50.085041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.389 [2024-12-09 10:40:50.089657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.389 [2024-12-09 10:40:50.089721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.389 [2024-12-09 10:40:50.089744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.389 [2024-12-09 
10:40:50.094120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.389 [2024-12-09 10:40:50.094187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.389 [2024-12-09 10:40:50.094208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.389 [2024-12-09 10:40:50.098541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.389 [2024-12-09 10:40:50.098606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.389 [2024-12-09 10:40:50.098626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.389 [2024-12-09 10:40:50.103296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.389 [2024-12-09 10:40:50.103354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.389 [2024-12-09 10:40:50.103374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.389 [2024-12-09 10:40:50.107832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.389 [2024-12-09 10:40:50.107892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.389 [2024-12-09 10:40:50.107912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.112916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.112972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.112991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.118225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.118283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.118301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.123499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.123604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.123623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.128246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.128355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.128378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.133051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.133150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.133169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.137852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.137909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.137928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.142463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.142525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.142544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.147004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.147095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.147114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.151762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.151845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.151864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.156631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.156720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.156739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.161571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.161664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.161682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.166571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.166690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 
[2024-12-09 10:40:50.166708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.171853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.171950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.171969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.177211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.177278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.177297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.182147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.649 [2024-12-09 10:40:50.182265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.649 [2024-12-09 10:40:50.182283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.649 [2024-12-09 10:40:50.187013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.187076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.187094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.192540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.192590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.192609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.197937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.198004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.198023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.203857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.204015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.204037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.211366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.211511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.211533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.218154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.218247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.218268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.223442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.223712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.223733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.228381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.228661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.228682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.233504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.233792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.233818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.238428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.238725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.238746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.243609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.243915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.243935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.248364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.248676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.248696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.252919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 
[2024-12-09 10:40:50.253229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.253250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.257299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.257598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.257619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.261639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.261952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.261976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.266139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.266445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.266466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.270659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.270951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.270972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.275221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.275518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.275539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.279626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.279926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.279946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.284073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.284377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.284398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.288533] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.288831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.288852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.293580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.293871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.293892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.298911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.299210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.299233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.303522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.303829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.303851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:30:12.650 [2024-12-09 10:40:50.308065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.308360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.308380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.312510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.312805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.312833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.316955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.317257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.650 [2024-12-09 10:40:50.317278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.650 [2024-12-09 10:40:50.321246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.650 [2024-12-09 10:40:50.321541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.321562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.325480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.325780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.325800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.329943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.330232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.330252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.334492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.334782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.334802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.339575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.339879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.339900] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.344490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.344781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.344802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.349273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.349568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.349587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.354085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.354379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.354399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.359052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.359332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.359352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.364202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.364472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.364492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.651 [2024-12-09 10:40:50.368706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.651 [2024-12-09 10:40:50.368998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.651 [2024-12-09 10:40:50.369018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.373053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.373348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.373368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.377280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.377566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:12.911 [2024-12-09 10:40:50.377586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.381574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.381875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.381899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.385937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.386224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.386259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.390370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.390661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.390680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.394884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.395166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.395186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.399356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.399662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.399682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.403773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.404065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.404085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.408133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.408460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.408484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.412390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.412633] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.412654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.416414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.416634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.416654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.420271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.911 [2024-12-09 10:40:50.420503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.911 [2024-12-09 10:40:50.420522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.911 [2024-12-09 10:40:50.424098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.424333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.424352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.428159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.428391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.428410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.432517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.432722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.432742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.437176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.437399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.437419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.441268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.441485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.441505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.445531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 
00:30:12.912 [2024-12-09 10:40:50.445771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.445791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.449413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.449639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.449658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.453397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.453616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.453636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.457340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.457576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.457596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.461270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.461504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.461524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.465534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.465761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.465781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.469578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.469820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.469840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.473517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.473743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.473762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.477402] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.477639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.477659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.481296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.481529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.481549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.485193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.485421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.485442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.489335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.489561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.489584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:30:12.912 [2024-12-09 10:40:50.493845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.494065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.494084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.498269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.498511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.498531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.503115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.503347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.503367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.507705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.507939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.507959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.512040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.512255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.512274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.516008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.516225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.516245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.519845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.520063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.520083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.523905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.524155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.524175] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.528610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.528825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.528844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.912 [2024-12-09 10:40:50.532752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.912 [2024-12-09 10:40:50.532976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.912 [2024-12-09 10:40:50.532996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.536845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.537085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.537104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.540873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.541099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.541118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.544920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.545146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.545166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.548917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.549138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.549157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.552993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.553237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.553257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.556939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.557169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:12.913 [2024-12-09 10:40:50.557189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.560935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.561169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.561188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.564947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.565199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.565218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.569401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.569639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.569659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.573195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.573425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.573444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.576938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.577157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.577176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.580669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.580926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.580945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.584415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.584599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.584617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.588174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.588379] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.588397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.592095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.592320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.592339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.597096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.597404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.597428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.602112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.602570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.602590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.607184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.607424] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.607444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.612604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.612792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.612819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.618570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.618857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.618878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.625126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.625300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.625318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.913 [2024-12-09 10:40:50.631359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with 
pdu=0x200016eff3c8 00:30:12.913 [2024-12-09 10:40:50.631617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.913 [2024-12-09 10:40:50.631637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.173 [2024-12-09 10:40:50.637890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.173 [2024-12-09 10:40:50.638139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.173 [2024-12-09 10:40:50.638159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.173 [2024-12-09 10:40:50.644955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.173 [2024-12-09 10:40:50.645113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.173 [2024-12-09 10:40:50.645131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.173 [2024-12-09 10:40:50.651775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.173 [2024-12-09 10:40:50.652048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.173 [2024-12-09 10:40:50.652072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.173 [2024-12-09 10:40:50.658369] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.173 [2024-12-09 10:40:50.658636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.173 [2024-12-09 10:40:50.658656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.173 [2024-12-09 10:40:50.665140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.665294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.665312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.671861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.672048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.672067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.678309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.678517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.678537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 
10:40:50.683313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.683495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.683514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.687828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.688022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.688040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.692504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.692666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.692683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.697284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.697487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.697504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.701870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.702064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.702082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.706016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.706207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.706225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.710000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.710192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.710210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.713869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.714061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.714079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.717800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.718003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.718021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.721711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.721898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.721916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.725691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.725888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.725907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.729624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.729786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.729804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.733526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.733682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.733700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.737564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.737739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.737757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.741528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.741694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.741713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.745500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.745634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 
[2024-12-09 10:40:50.745653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.749487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.749667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.749686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.753370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.753534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.753552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.757358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.757533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.757551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.761264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.761422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.761440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.765082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.765235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.765254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.769045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.769226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.769248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.773137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.773271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.773289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.174 [2024-12-09 10:40:50.777825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.174 [2024-12-09 10:40:50.777988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.174 [2024-12-09 10:40:50.778006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.782082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.782239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.782257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.786232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.786400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.786418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.791049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.791214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.791232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.795317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.795482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.795500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.799322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.799491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.799510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.803179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.803366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.803383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.806982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.807146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.807164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.810704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 
[2024-12-09 10:40:50.810870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.810889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.815023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.815170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.815188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.818980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.819132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.819150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.822864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.823020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.823037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.826719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.826889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.826906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.830540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.830730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.830747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.834380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.834543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.834561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.838192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.838360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.838378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.842376] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.842529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.842547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.847140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.847305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.847323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.851462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.851627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.851645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.855479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.855642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.855660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:30:13.175 [2024-12-09 10:40:50.859493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.859656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.859674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.863249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.863411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.863428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.867132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.867308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.867325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.871633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.871774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.871792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.876031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.876181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.876201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.879983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.880134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.880152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.883844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.883985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.884003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.887722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.175 [2024-12-09 10:40:50.887879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.175 [2024-12-09 10:40:50.887897] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.175 [2024-12-09 10:40:50.891819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.176 [2024-12-09 10:40:50.891991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.176 [2024-12-09 10:40:50.892009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.435 [2024-12-09 10:40:50.895705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.895877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.895895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.899548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.899721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.899740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.903401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.903558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.903576] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.907457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.907599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.907617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.912173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.912316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.912334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.916567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.916750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.916768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.920521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.920675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:13.436 [2024-12-09 10:40:50.920693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.924389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.924543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.924561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.928267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.928429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.928447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.932182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.932348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.932365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.936119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.936268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.936286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.940034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.940201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.940220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.943947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.944106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.944124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.947822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.947959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.947977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.951739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.951893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.951911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.955683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.955840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.955859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.959639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.959783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.959802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.963535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.963704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.963722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.967401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.967569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.967586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.971243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.971420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.971438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.436 [2024-12-09 10:40:50.975120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.436 [2024-12-09 10:40:50.975291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.436 [2024-12-09 10:40:50.975309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:50.978936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:50.979103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:50.979124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:50.982694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 
00:30:13.437 [2024-12-09 10:40:50.982862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:50.982881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:50.986793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:50.987015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:50.987034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:50.991535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:50.991625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:50.991643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:50.995833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:50.996010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:50.996028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:50.999907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.000080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.000098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.003820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.003988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.004005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.007531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.007695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.007716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.011479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.011620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.011639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.015745] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.015885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.015903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.020298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.020442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.020460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.024815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.024927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.024945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.029324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.029483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.029501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
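The repeated `tcp.c:2241:data_crc32_calc_done` errors above are the host side detecting a mismatched NVMe/TCP data digest (DDGST), which is a CRC-32C checksum carried after each data-bearing PDU; this test run injects digest corruption, so every WRITE completes with a transient transport error. As an aside, a minimal pure-Python sketch of CRC-32C (Castagnoli, reflected polynomial 0x82F63B78) — the function name `crc32c` is ours for illustration, not an SPDK API:

```python
def _make_crc32c_table():
    """Build the 256-entry lookup table for reflected CRC-32C."""
    poly = 0x82F63B78  # reversed Castagnoli polynomial
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _make_crc32c_table()

def crc32c(data: bytes, crc: int = 0) -> int:
    """Table-driven CRC-32C; init and final XOR are 0xFFFFFFFF."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789":
print(hex(crc32c(b"123456789")))  # prints 0xe3069283
```

When the receiver's computed CRC-32C over the PDU payload differs from the DDGST field on the wire, SPDK fails the command with the transient transport error status (00/22) seen throughout this log.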
00:30:13.437 [2024-12-09 10:40:51.033414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.033582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.033600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.037236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.037381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.037399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.041086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.041254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.041273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.044982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.045145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.045163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.048875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.049027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.049045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.052648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.052823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.052841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.056540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.056695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.056713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.060500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.437 [2024-12-09 10:40:51.060648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.437 [2024-12-09 10:40:51.060667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.437 [2024-12-09 10:40:51.064439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.064611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.064629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.068244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.068414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.068432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.072122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.072270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.072288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.075963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.076142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.076162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.079844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.080008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.080027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.438 7001.00 IOPS, 875.12 MiB/s [2024-12-09T09:40:51.162Z] [2024-12-09 10:40:51.084834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.084924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.084951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.089622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.089687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.089706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.094085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.094159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.094178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.098555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.098618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.098637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.103106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.103163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.103182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.107691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.107764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.107783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.112328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.112396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.112414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.117046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.117103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.117120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.121709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.121778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.121796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.126211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.126281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.126299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.131035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 
[2024-12-09 10:40:51.131139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.131158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.135884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.135949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.135967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.140583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.140643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.140661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.145262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.145317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.145335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.150178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.150232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.150250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.438 [2024-12-09 10:40:51.155440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.438 [2024-12-09 10:40:51.155507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.438 [2024-12-09 10:40:51.155525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.697 [2024-12-09 10:40:51.160467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.697 [2024-12-09 10:40:51.160539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.697 [2024-12-09 10:40:51.160557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.697 [2024-12-09 10:40:51.166348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.697 [2024-12-09 10:40:51.166418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.697 [2024-12-09 10:40:51.166437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.697 [2024-12-09 10:40:51.171622] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.697 [2024-12-09 10:40:51.171715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.697 [2024-12-09 10:40:51.171733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.177219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.177273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.177291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.182571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.182704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.182722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.188116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.188189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.188207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:30:13.698 [2024-12-09 10:40:51.193099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.193158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.193177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.199065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.199138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.199157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.204432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.204498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.204516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.209660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.209730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.209749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.215191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.215247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.215268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.220177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.220313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.220331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.225473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.225594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.225613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.230994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.231065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.231084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.236693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.236750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.236767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.241572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.241627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.241645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.246204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.246263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.246282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.250839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.250898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.250916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.255444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.255509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.255527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.260218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.260282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.260300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.264737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.264792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.264816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.269313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.269370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:13.698 [2024-12-09 10:40:51.269388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.274012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.274074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.274092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.278577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.278643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.698 [2024-12-09 10:40:51.278660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.698 [2024-12-09 10:40:51.283031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.698 [2024-12-09 10:40:51.283096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.283114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.287807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.287880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.287898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.292740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.292813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.292831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.297311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.297371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.297389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.301888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.302011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.302029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.308090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.308225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.308243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.314728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.315084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.315104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.321581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.321647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.321665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.328310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.328376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.328394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.335320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.335400] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.335419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.341388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.341459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.341477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.346394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.346451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.346469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.351308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.351367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.351389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.356129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with 
pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.356201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.356219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.360907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.360973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.360991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.365576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.365686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.365705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.370367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.370421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.370440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.375098] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.375226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.375245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.380170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.380231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.380250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.385330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.385404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.385422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.390424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.390521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.390539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.699 [2024-12-09 10:40:51.395916] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.699 [2024-12-09 10:40:51.395979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.699 [2024-12-09 10:40:51.395997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.700 [2024-12-09 10:40:51.401272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.700 [2024-12-09 10:40:51.401358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.700 [2024-12-09 10:40:51.401377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.700 [2024-12-09 10:40:51.406418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.700 [2024-12-09 10:40:51.406496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.700 [2024-12-09 10:40:51.406514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.700 [2024-12-09 10:40:51.411460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.700 [2024-12-09 10:40:51.411523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.700 [2024-12-09 10:40:51.411542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:30:13.700 [2024-12-09 10:40:51.416578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.700 [2024-12-09 10:40:51.416672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.700 [2024-12-09 10:40:51.416691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.958 [2024-12-09 10:40:51.422150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.958 [2024-12-09 10:40:51.422205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.958 [2024-12-09 10:40:51.422223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.958 [2024-12-09 10:40:51.427205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.958 [2024-12-09 10:40:51.427265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.958 [2024-12-09 10:40:51.427283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.958 [2024-12-09 10:40:51.431861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.958 [2024-12-09 10:40:51.431924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.958 [2024-12-09 10:40:51.431942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.958 [2024-12-09 10:40:51.436397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.958 [2024-12-09 10:40:51.436472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.958 [2024-12-09 10:40:51.436490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.958 [2024-12-09 10:40:51.440945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.958 [2024-12-09 10:40:51.441002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.958 [2024-12-09 10:40:51.441020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.958 [2024-12-09 10:40:51.445596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.958 [2024-12-09 10:40:51.445667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.958 [2024-12-09 10:40:51.445685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.958 [2024-12-09 10:40:51.450757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.450926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.450945] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.456858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.457010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.457028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.463203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.463367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.463385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.469522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.469605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.469624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.476120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.476258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.476277] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.484057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.484132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.484150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.491385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.491657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.491682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.498356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.498644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.498664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.505234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.505503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:13.959 [2024-12-09 10:40:51.505524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.511255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.511541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.511561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.517651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.517953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.517974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.523621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.523905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.523925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.530325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.530607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.530628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.536581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.536892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.536913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.542770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.543088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.543109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.549424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.549707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.549731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.555591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.555884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.555903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.561953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.562444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.562465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.568493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.568762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.568782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.574454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.574734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.574755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.580171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 
[2024-12-09 10:40:51.580483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.580503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.959 [2024-12-09 10:40:51.584975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.959 [2024-12-09 10:40:51.585268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.959 [2024-12-09 10:40:51.585288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.590165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.590463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.590484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.594654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.594999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.595019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.600222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.600586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.600607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.605914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.606213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.606234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.610836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.611138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.611159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.615778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.616070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.616090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.620771] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.621070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.621091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.625612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.625909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.625930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.630441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.630734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.630755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.634984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.635275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.635294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:30:13.960 [2024-12-09 10:40:51.640713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.641114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.641134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.647117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.647396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.647416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.653466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.653798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.653824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.659618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.659940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.659960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.664722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.665042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.665063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.671093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.671391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.671411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.960 [2024-12-09 10:40:51.677159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:13.960 [2024-12-09 10:40:51.677438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.960 [2024-12-09 10:40:51.677459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.220 [2024-12-09 10:40:51.681731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.220 [2024-12-09 10:40:51.682016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.220 [2024-12-09 10:40:51.682036] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.220 [2024-12-09 10:40:51.686227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.220 [2024-12-09 10:40:51.686521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.220 [2024-12-09 10:40:51.686541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.220 [2024-12-09 10:40:51.690866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.220 [2024-12-09 10:40:51.691157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.220 [2024-12-09 10:40:51.691180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.220 [2024-12-09 10:40:51.695361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.220 [2024-12-09 10:40:51.695636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.220 [2024-12-09 10:40:51.695656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.220 [2024-12-09 10:40:51.699886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.220 [2024-12-09 10:40:51.700167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.700187] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.704196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.704481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.704501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.708624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.708917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.708936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.713238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.713513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.713533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.718500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.718771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:14.221 [2024-12-09 10:40:51.718791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.723410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.723696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.723716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.728650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.728951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.728971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.733639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.733924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.733944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.738409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.738681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.738701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.743166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.743441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.743460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.747960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.748270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.752642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.752946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.752966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.757248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.757546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.757565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.762553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.762833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.762853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.767525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.767818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.767839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.772319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.772611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.772630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.777046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.777337] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.777358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.781970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.782256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.782275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.786932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.787234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.787254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.791558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.791849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.791869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.796333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with 
pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.796615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.796635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.800957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.801251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.801271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.805344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.805640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.805660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.809765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.810066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.810087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.813928] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.814221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.814244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.818338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.818633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.818653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.822856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.221 [2024-12-09 10:40:51.823141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.221 [2024-12-09 10:40:51.823161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.221 [2024-12-09 10:40:51.827472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.827782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.827801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.832062] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.832356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.832376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.836741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.837045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.837076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.841971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.842280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.842299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.847982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.848248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.848268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:30:14.222 [2024-12-09 10:40:51.854634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.855019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.855039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.861627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.861987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.862008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.868977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.869301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.869321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.875862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.876191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.876211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.882802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.883086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.883105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.890114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.890462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.890483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.897249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.897639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.897659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.904297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.904653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.904673] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.911991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.912343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.912363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.918861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.919236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.919256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.926391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.926664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.926684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.933396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.933685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.933705] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.222 [2024-12-09 10:40:51.940468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.222 [2024-12-09 10:40:51.940844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.222 [2024-12-09 10:40:51.940864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.947750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.948101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.948121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.954877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.955241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.955261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.961910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.962285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:14.482 [2024-12-09 10:40:51.962306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.968557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.968838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.968858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.973174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.973472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.973491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.977732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.978017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.978040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.982019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.982314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.982333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.986232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.986517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.986537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.990550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.990845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.990865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.994741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.995039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.995059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:51.998970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.482 [2024-12-09 10:40:51.999256] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.482 [2024-12-09 10:40:51.999276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.482 [2024-12-09 10:40:52.003211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.003496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.003516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.007397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.007689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.007713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.011476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.011737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.011757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.015460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.015683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.015702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.019249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.019489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.019509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.023024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.023250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.023270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.026775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.027020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.027039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.030563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 
00:30:14.483 [2024-12-09 10:40:52.030769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.030789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.034408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.034614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.034634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.038180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.038387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.038407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.041983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.042181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.042201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.045741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.045948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.045968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.049466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.049662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.049680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.053216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.053407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.053425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.056936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.057120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.057139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.060659] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.060879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.060898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.064390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.064586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.064604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.068101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.068298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.068319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.071798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.072009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.072026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:30:14.483 [2024-12-09 10:40:52.075514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.075719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.075736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.079393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.079586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.079608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.483 [2024-12-09 10:40:52.083126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.083309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.083327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.483 6488.00 IOPS, 811.00 MiB/s [2024-12-09T09:40:52.207Z] [2024-12-09 10:40:52.087639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf450d0) with pdu=0x200016eff3c8 00:30:14.483 [2024-12-09 10:40:52.087704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.483 [2024-12-09 10:40:52.087722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.483 00:30:14.483 Latency(us) 00:30:14.483 [2024-12-09T09:40:52.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.483 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:14.483 nvme0n1 : 2.00 6487.96 810.99 0.00 0.00 2461.93 1341.93 7708.28 00:30:14.483 [2024-12-09T09:40:52.207Z] =================================================================================================================== 00:30:14.483 [2024-12-09T09:40:52.207Z] Total : 6487.96 810.99 0.00 0.00 2461.93 1341.93 7708.28 00:30:14.483 { 00:30:14.483 "results": [ 00:30:14.483 { 00:30:14.483 "job": "nvme0n1", 00:30:14.483 "core_mask": "0x2", 00:30:14.483 "workload": "randwrite", 00:30:14.483 "status": "finished", 00:30:14.483 "queue_depth": 16, 00:30:14.483 "io_size": 131072, 00:30:14.483 "runtime": 2.00325, 00:30:14.483 "iops": 6487.957069761637, 00:30:14.483 "mibps": 810.9946337202047, 00:30:14.483 "io_failed": 0, 00:30:14.483 "io_timeout": 0, 00:30:14.483 "avg_latency_us": 2461.933757900175, 00:30:14.483 "min_latency_us": 1341.9276190476191, 00:30:14.483 "max_latency_us": 7708.281904761905 00:30:14.483 } 00:30:14.483 ], 00:30:14.483 "core_count": 1 00:30:14.483 } 00:30:14.483 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:14.483 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:14.484 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:14.484 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:14.484 | .driver_specific 00:30:14.484 | .nvme_error 
00:30:14.484 | .status_code 00:30:14.484 | .command_transient_transport_error' 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 420 > 0 )) 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2810412 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2810412 ']' 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2810412 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2810412 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2810412' 00:30:14.743 killing process with pid 2810412 00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2810412 00:30:14.743 Received shutdown signal, test time was about 2.000000 seconds 00:30:14.743 00:30:14.743 Latency(us) 00:30:14.743 [2024-12-09T09:40:52.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.743 [2024-12-09T09:40:52.467Z] =================================================================================================================== 00:30:14.743 [2024-12-09T09:40:52.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:30:14.743 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2810412 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2808641 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2808641 ']' 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2808641 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808641 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808641' 00:30:15.002 killing process with pid 2808641 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2808641 00:30:15.002 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2808641 00:30:15.261 00:30:15.261 real 0m14.039s 00:30:15.261 user 0m26.922s 00:30:15.261 sys 0m4.539s 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:15.261 ************************************ 00:30:15.261 END TEST 
nvmf_digest_error 00:30:15.261 ************************************ 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:15.261 rmmod nvme_tcp 00:30:15.261 rmmod nvme_fabrics 00:30:15.261 rmmod nvme_keyring 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2808641 ']' 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2808641 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2808641 ']' 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2808641 00:30:15.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2808641) - No such process 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2808641 is not found' 00:30:15.261 Process with pid 2808641 is not found 00:30:15.261 10:40:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.261 10:40:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.797 00:30:17.797 real 0m36.395s 00:30:17.797 user 0m55.233s 00:30:17.797 sys 0m13.800s 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:17.797 ************************************ 00:30:17.797 END TEST nvmf_digest 00:30:17.797 ************************************ 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.797 ************************************ 00:30:17.797 START TEST nvmf_bdevperf 00:30:17.797 ************************************ 00:30:17.797 10:40:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:17.797 * Looking for test storage... 00:30:17.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.797 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:17.797 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:17.797 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:17.797 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:17.797 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # 
IFS=.-: 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.798 --rc genhtml_branch_coverage=1 00:30:17.798 --rc genhtml_function_coverage=1 00:30:17.798 --rc genhtml_legend=1 00:30:17.798 --rc geninfo_all_blocks=1 00:30:17.798 --rc geninfo_unexecuted_blocks=1 00:30:17.798 00:30:17.798 ' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.798 --rc genhtml_branch_coverage=1 00:30:17.798 --rc genhtml_function_coverage=1 00:30:17.798 --rc genhtml_legend=1 00:30:17.798 --rc geninfo_all_blocks=1 00:30:17.798 --rc geninfo_unexecuted_blocks=1 00:30:17.798 00:30:17.798 ' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.798 --rc genhtml_branch_coverage=1 00:30:17.798 --rc genhtml_function_coverage=1 00:30:17.798 --rc genhtml_legend=1 00:30:17.798 --rc geninfo_all_blocks=1 00:30:17.798 --rc geninfo_unexecuted_blocks=1 00:30:17.798 00:30:17.798 ' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.798 --rc genhtml_branch_coverage=1 00:30:17.798 --rc genhtml_function_coverage=1 00:30:17.798 --rc genhtml_legend=1 00:30:17.798 --rc geninfo_all_blocks=1 00:30:17.798 --rc 
geninfo_unexecuted_blocks=1 00:30:17.798 00:30:17.798 ' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.798 
10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:17.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.798 10:40:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:23.070 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:23.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:23.071 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:23.071 Found net devices under 0000:86:00.0: cvl_0_0 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:23.071 Found net devices under 0000:86:00.1: cvl_0_1 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.071 10:41:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.071 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.329 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.329 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.329 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.329 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.329 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.329 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:30:23.329 10:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:30:23.329 00:30:23.329 --- 10.0.0.2 ping statistics --- 00:30:23.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.329 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:30:23.329 00:30:23.329 --- 10.0.0.1 ping statistics --- 00:30:23.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.329 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.329 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.329 10:41:01 
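The nvmf_tcp_init trace above flushes addresses, creates a network namespace, moves the target interface into it, assigns 10.0.0.1/10.0.0.2, brings the links up, opens TCP port 4420 in iptables, and pings both directions. A condensed sketch of that sequence, reconstructed from the logged commands (not SPDK's actual helper); it assumes root privileges and the cvl_0_0/cvl_0_1 interface names seen in this log:

```shell
# Sketch of the nvmf_tcp_init sequence traced above (assumes root and the
# cvl_0_0/cvl_0_1 interface names from this log; not SPDK's actual helper).
setup_target_netns() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$tgt_if"            # start from clean addresses
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"                    # isolate the target side
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"                       # initiator IP
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target IP
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # allow NVMe/TCP traffic in, then verify reachability both ways
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Running the target inside the namespace while the initiator stays in the root namespace is what lets a single physical host exercise the NVMe/TCP path over real NIC ports.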
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2814417 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2814417 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2814417 ']' 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.587 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.587 [2024-12-09 10:41:01.109365] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:30:23.587 [2024-12-09 10:41:01.109405] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.587 [2024-12-09 10:41:01.188708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:23.587 [2024-12-09 10:41:01.231720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.587 [2024-12-09 10:41:01.231757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.587 [2024-12-09 10:41:01.231766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.587 [2024-12-09 10:41:01.231773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.587 [2024-12-09 10:41:01.231778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:23.587 [2024-12-09 10:41:01.236826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.587 [2024-12-09 10:41:01.236918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.587 [2024-12-09 10:41:01.236918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.574 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.574 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:24.574 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.574 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.574 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.574 10:41:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.574 [2024-12-09 10:41:02.007553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.574 Malloc0 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.574 [2024-12-09 10:41:02.075497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:24.574 
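The tgt_init portion of the trace boils down to five RPC calls: create the TCP transport, create a malloc bdev, create a subsystem, attach the namespace, and add a listener. A minimal sketch of that sequence; the rpc.py path and the RPC_CMD override hook are illustrative assumptions, the commands and arguments are copied from the trace:

```shell
# Sketch of the host/bdevperf.sh tgt_init RPC sequence traced above.
# RPC_CMD is a hypothetical override hook; the default path is an assumption.
configure_target() {
    local rpc=${RPC_CMD:-./scripts/rpc.py}
    "$rpc" nvmf_create_transport -t tcp -o -u 8192    # flags as traced above
    "$rpc" bdev_malloc_create 64 512 -b Malloc0       # RAM-backed bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```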
10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:24.574 { 00:30:24.574 "params": { 00:30:24.574 "name": "Nvme$subsystem", 00:30:24.574 "trtype": "$TEST_TRANSPORT", 00:30:24.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.574 "adrfam": "ipv4", 00:30:24.574 "trsvcid": "$NVMF_PORT", 00:30:24.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.574 "hdgst": ${hdgst:-false}, 00:30:24.574 "ddgst": ${ddgst:-false} 00:30:24.574 }, 00:30:24.574 "method": "bdev_nvme_attach_controller" 00:30:24.574 } 00:30:24.574 EOF 00:30:24.574 )") 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:24.574 10:41:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:24.574 "params": { 00:30:24.574 "name": "Nvme1", 00:30:24.574 "trtype": "tcp", 00:30:24.574 "traddr": "10.0.0.2", 00:30:24.574 "adrfam": "ipv4", 00:30:24.574 "trsvcid": "4420", 00:30:24.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:24.574 "hdgst": false, 00:30:24.574 "ddgst": false 00:30:24.574 }, 00:30:24.574 "method": "bdev_nvme_attach_controller" 00:30:24.574 }' 00:30:24.574 [2024-12-09 10:41:02.126829] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
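gen_nvmf_target_json, whose expansion is traced above, builds one JSON object per subsystem with a here-doc, joins the objects with IFS=',', and prints the result for bdevperf to consume via /dev/fd. A simplified sketch of the same pattern; the digest options are hardcoded to false and the plain-array wrapper is illustrative, not the exact shape the helper emits:

```shell
# Simplified sketch of the gen_nvmf_target_json here-doc pattern traced above.
gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do        # default to a single subsystem "1"
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                            # comma-join the per-subsystem objects
    printf '[%s]\n' "${config[*]}"
}
```

This is why the trace shows both the templated form (with `$subsystem`, `$TEST_TRANSPORT`, `${hdgst:-false}`) and, after `jq .`, the fully substituted Nvme1 object.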
00:30:24.574 [2024-12-09 10:41:02.126872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814662 ] 00:30:24.574 [2024-12-09 10:41:02.200922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.574 [2024-12-09 10:41:02.242088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.879 Running I/O for 1 seconds... 00:30:25.812 11347.00 IOPS, 44.32 MiB/s 00:30:25.812 Latency(us) 00:30:25.812 [2024-12-09T09:41:03.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.812 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:25.812 Verification LBA range: start 0x0 length 0x4000 00:30:25.812 Nvme1n1 : 1.01 11376.25 44.44 0.00 0.00 11208.86 2200.14 11858.90 00:30:25.812 [2024-12-09T09:41:03.536Z] =================================================================================================================== 00:30:25.812 [2024-12-09T09:41:03.536Z] Total : 11376.25 44.44 0.00 0.00 11208.86 2200.14 11858.90 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2814905 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:26.070 { 00:30:26.070 "params": { 00:30:26.070 "name": "Nvme$subsystem", 00:30:26.070 "trtype": "$TEST_TRANSPORT", 00:30:26.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.070 "adrfam": "ipv4", 00:30:26.070 "trsvcid": "$NVMF_PORT", 00:30:26.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.070 "hdgst": ${hdgst:-false}, 00:30:26.070 "ddgst": ${ddgst:-false} 00:30:26.070 }, 00:30:26.070 "method": "bdev_nvme_attach_controller" 00:30:26.070 } 00:30:26.070 EOF 00:30:26.070 )") 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:26.070 10:41:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:26.070 "params": { 00:30:26.070 "name": "Nvme1", 00:30:26.070 "trtype": "tcp", 00:30:26.070 "traddr": "10.0.0.2", 00:30:26.070 "adrfam": "ipv4", 00:30:26.070 "trsvcid": "4420", 00:30:26.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:26.070 "hdgst": false, 00:30:26.070 "ddgst": false 00:30:26.070 }, 00:30:26.070 "method": "bdev_nvme_attach_controller" 00:30:26.070 }' 00:30:26.070 [2024-12-09 10:41:03.663422] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
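As a sanity check on the bdevperf summary tables in this log: with the 4 KiB I/O size used here (`-o 4096`), the IOPS and MiB/s columns should agree, e.g. 11376.25 IOPS from the 1-second run works out to the reported 44.44 MiB/s:

```shell
# Cross-check a bdevperf summary row: IOPS x I/O size, converted to MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 11376.25 * 4096 / (1024 * 1024) }'
# → 44.44 MiB/s
```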
00:30:26.070 [2024-12-09 10:41:03.663471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814905 ] 00:30:26.070 [2024-12-09 10:41:03.739567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.070 [2024-12-09 10:41:03.777047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.636 Running I/O for 15 seconds... 00:30:28.500 11319.00 IOPS, 44.21 MiB/s [2024-12-09T09:41:06.791Z] 11346.50 IOPS, 44.32 MiB/s [2024-12-09T09:41:06.791Z] 10:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2814417 00:30:29.067 10:41:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:29.067 [2024-12-09 10:41:06.638952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.638987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639044] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 
10:41:06.639265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.067 [2024-12-09 10:41:06.639333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.067 [2024-12-09 10:41:06.639343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639371] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.068 [2024-12-09 10:41:06.639437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639580] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:125 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:29.068 [2024-12-09 10:41:06.639829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.639988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.639996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.640002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.640010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.640016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.640025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.640032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.640040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.640047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.640055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.640061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.640069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.640075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.640084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.068 [2024-12-09 10:41:06.640090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.068 [2024-12-09 10:41:06.640099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 
[2024-12-09 10:41:06.640127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 
[2024-12-09 10:41:06.640383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.069 [2024-12-09 10:41:06.640419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 
[2024-12-09 10:41:06.640639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.069 [2024-12-09 10:41:06.640694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.069 [2024-12-09 10:41:06.640702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 
[2024-12-09 10:41:06.640905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.640989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.640998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.641005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.641013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.641020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.641028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.641035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.641042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.641050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.641058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.070 [2024-12-09 10:41:06.641065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.070 [2024-12-09 10:41:06.641073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.070 [2024-12-09 10:41:06.641079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.070 [2024-12-09 10:41:06.641094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.070 [2024-12-09 10:41:06.641109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.070 [2024-12-09 10:41:06.641126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.070 [2024-12-09 10:41:06.641142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb06410 is same with the state(6) to be set
00:30:29.070 [2024-12-09 10:41:06.641157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:29.070 [2024-12-09 10:41:06.641164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:29.070 [2024-12-09 10:41:06.641170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98976 len:8 PRP1 0x0 PRP2 0x0
00:30:29.070 [2024-12-09 10:41:06.641178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:29.070 [2024-12-09 10:41:06.641267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:29.070 [2024-12-09 10:41:06.641282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:29.070 [2024-12-09 10:41:06.641296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:29.070 [2024-12-09 10:41:06.641309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.070 [2024-12-09 10:41:06.641316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:29.070 [2024-12-09 10:41:06.644118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.070 [2024-12-09 10:41:06.644145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:29.070 [2024-12-09 10:41:06.644692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.070 [2024-12-09 10:41:06.644708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:29.070 [2024-12-09 10:41:06.644717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:29.070 [2024-12-09 10:41:06.644897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:29.070 [2024-12-09 10:41:06.645073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.070 [2024-12-09 10:41:06.645083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.070 [2024-12-09 10:41:06.645091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.070 [2024-12-09 10:41:06.645098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.071 [2024-12-09 10:41:06.657172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.657600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.657647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.657674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.658277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.658877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.658887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.658895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.658901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.669953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.670370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.670388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.670396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.670557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.670717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.670727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.670733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.670740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.682784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.683197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.683237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.683265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.683844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.684007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.684017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.684023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.684029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.695575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.695963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.695985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.695993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.696153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.696312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.696322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.696328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.696334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.708343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.708701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.708748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.708773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.709376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.709915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.709925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.709932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.709939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.721181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.721598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.721649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.721674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.722208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.722370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.722378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.722385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.722391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.734113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.734461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.734480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.734487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.734660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.734837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.734847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.734855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.734861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.746926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.747265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.747283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.747291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.747451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.747612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.747621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.071 [2024-12-09 10:41:06.747627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.071 [2024-12-09 10:41:06.747634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.071 [2024-12-09 10:41:06.759768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.071 [2024-12-09 10:41:06.760102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.071 [2024-12-09 10:41:06.760120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.071 [2024-12-09 10:41:06.760127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.071 [2024-12-09 10:41:06.760302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.071 [2024-12-09 10:41:06.760462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.071 [2024-12-09 10:41:06.760472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.072 [2024-12-09 10:41:06.760478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.072 [2024-12-09 10:41:06.760484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.072 [2024-12-09 10:41:06.772622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.072 [2024-12-09 10:41:06.773046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.072 [2024-12-09 10:41:06.773087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.072 [2024-12-09 10:41:06.773113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.072 [2024-12-09 10:41:06.773700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.072 [2024-12-09 10:41:06.774210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.072 [2024-12-09 10:41:06.774221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.072 [2024-12-09 10:41:06.774230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.072 [2024-12-09 10:41:06.774237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.072 [2024-12-09 10:41:06.785580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.072 [2024-12-09 10:41:06.785936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.072 [2024-12-09 10:41:06.785955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.072 [2024-12-09 10:41:06.785963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.072 [2024-12-09 10:41:06.786132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.072 [2024-12-09 10:41:06.786302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.072 [2024-12-09 10:41:06.786312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.072 [2024-12-09 10:41:06.786318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.072 [2024-12-09 10:41:06.786325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.331 [2024-12-09 10:41:06.798548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.331 [2024-12-09 10:41:06.798975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.331 [2024-12-09 10:41:06.798993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.331 [2024-12-09 10:41:06.799001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.331 [2024-12-09 10:41:06.799169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.331 [2024-12-09 10:41:06.799339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.331 [2024-12-09 10:41:06.799349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.331 [2024-12-09 10:41:06.799356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.331 [2024-12-09 10:41:06.799363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.331 [2024-12-09 10:41:06.811349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.331 [2024-12-09 10:41:06.811777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.331 [2024-12-09 10:41:06.811835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.331 [2024-12-09 10:41:06.811861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.331 [2024-12-09 10:41:06.812268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.331 [2024-12-09 10:41:06.812439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.331 [2024-12-09 10:41:06.812449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.331 [2024-12-09 10:41:06.812455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.331 [2024-12-09 10:41:06.812462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.331 [2024-12-09 10:41:06.824103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.331 [2024-12-09 10:41:06.824438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.331 [2024-12-09 10:41:06.824455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.331 [2024-12-09 10:41:06.824462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.331 [2024-12-09 10:41:06.824622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.331 [2024-12-09 10:41:06.824782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.331 [2024-12-09 10:41:06.824792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.331 [2024-12-09 10:41:06.824798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.824804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.836879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.837263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.837309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.837333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.837932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.838320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.838329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.838335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.838342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.849716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.850148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.850196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.850221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.850807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.851311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.851320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.851326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.851333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.862551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.862964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.862981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.862992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.863151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.863312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.863322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.863328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.863334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.875417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.875823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.875869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.875893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.876350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.876520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.876530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.876536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.876542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.888194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.888581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.888598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.888606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.888765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.888933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.888943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.888950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.888956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.901328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.901728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.901746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.901755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.901935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.902113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.902124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.902130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.902137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.914363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.914789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.914812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.914820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.914995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.915168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.915178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.915185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.915192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.927323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.927625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.927642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.927650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.927825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.927996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.928006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.928013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.928019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.940155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.940570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.940610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.940637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.941236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.941427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.941435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-12-09 10:41:06.941445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-12-09 10:41:06.941451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-12-09 10:41:06.953064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-12-09 10:41:06.953471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-12-09 10:41:06.953489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-12-09 10:41:06.953496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.332 [2024-12-09 10:41:06.953656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.332 [2024-12-09 10:41:06.953823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-12-09 10:41:06.953833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:06.953840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:06.953846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-12-09 10:41:06.965817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-12-09 10:41:06.966174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-12-09 10:41:06.966219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-12-09 10:41:06.966244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.333 [2024-12-09 10:41:06.966703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.333 [2024-12-09 10:41:06.966881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-12-09 10:41:06.966891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:06.966898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:06.966905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-12-09 10:41:06.978620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-12-09 10:41:06.978959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-12-09 10:41:06.978977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-12-09 10:41:06.978986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.333 [2024-12-09 10:41:06.979146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.333 [2024-12-09 10:41:06.979308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-12-09 10:41:06.979318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:06.979324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:06.979331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-12-09 10:41:06.991625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-12-09 10:41:06.992045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-12-09 10:41:06.992063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-12-09 10:41:06.992071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.333 [2024-12-09 10:41:06.992230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.333 [2024-12-09 10:41:06.992391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-12-09 10:41:06.992400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:06.992406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:06.992412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-12-09 10:41:07.004505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-12-09 10:41:07.004923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-12-09 10:41:07.004973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-12-09 10:41:07.004998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.333 [2024-12-09 10:41:07.005436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.333 [2024-12-09 10:41:07.005597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-12-09 10:41:07.005607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:07.005613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:07.005619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-12-09 10:41:07.017446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-12-09 10:41:07.017874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-12-09 10:41:07.017920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-12-09 10:41:07.017944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.333 [2024-12-09 10:41:07.018529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.333 [2024-12-09 10:41:07.018965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-12-09 10:41:07.018974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:07.018981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:07.018987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-12-09 10:41:07.030214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-12-09 10:41:07.030628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-12-09 10:41:07.030645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-12-09 10:41:07.030658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.333 [2024-12-09 10:41:07.030825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.333 [2024-12-09 10:41:07.030987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-12-09 10:41:07.030997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:07.031003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:07.031010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-12-09 10:41:07.043077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-12-09 10:41:07.043473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-12-09 10:41:07.043490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-12-09 10:41:07.043497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.333 [2024-12-09 10:41:07.043656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.333 [2024-12-09 10:41:07.043823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-12-09 10:41:07.043833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-12-09 10:41:07.043839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-12-09 10:41:07.043846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.056101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.056522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.056567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.056592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.057195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.057765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.057775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.057781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.057787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.068924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.069346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.069393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.069417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.069884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.070059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.070067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.070073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.070079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.081706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.082095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.082112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.082120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.082280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.082441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.082450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.082457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.082463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.094448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.094840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.094857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.094866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.095026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.095186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.095196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.095202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.095208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.107302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.107704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.107749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.107773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.108194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.108356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.108366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.108376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.108383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 9543.67 IOPS, 37.28 MiB/s [2024-12-09T09:41:07.316Z] [2024-12-09 10:41:07.120178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.120574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.120619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.120644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.121079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.121241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.121249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.121256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.121262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.132928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.133340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.133380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.133406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.133968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.134130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.134138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.134144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.134150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.145672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.146095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.146113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.146121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.146289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.146459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.146469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.146476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.146482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.158684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.159124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.159170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.159195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.159781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.160265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.160275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.160281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.160288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.171707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.172111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.172129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.172138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.172307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.172477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.172487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.172493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.172500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.184485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.184905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.184953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.184978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.185379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.185541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.185550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.185557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.185564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.197253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.197598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.197615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.197626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.197786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.197953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.197964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.197970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.197976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.210137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.210482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.592 [2024-12-09 10:41:07.210500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.592 [2024-12-09 10:41:07.210508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.592 [2024-12-09 10:41:07.210669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.592 [2024-12-09 10:41:07.210837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.592 [2024-12-09 10:41:07.210847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.592 [2024-12-09 10:41:07.210853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.592 [2024-12-09 10:41:07.210860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.592 [2024-12-09 10:41:07.223182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.592 [2024-12-09 10:41:07.223565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-12-09 10:41:07.223583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.593 [2024-12-09 10:41:07.223591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.593 [2024-12-09 10:41:07.223751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.593 [2024-12-09 10:41:07.223917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.593 [2024-12-09 10:41:07.223927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.593 [2024-12-09 10:41:07.223933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.593 [2024-12-09 10:41:07.223940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.593 [2024-12-09 10:41:07.235950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.593 [2024-12-09 10:41:07.236328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-12-09 10:41:07.236375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.593 [2024-12-09 10:41:07.236401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.593 [2024-12-09 10:41:07.236998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.593 [2024-12-09 10:41:07.237461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.593 [2024-12-09 10:41:07.237471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.593 [2024-12-09 10:41:07.237478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.593 [2024-12-09 10:41:07.237485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.593 [2024-12-09 10:41:07.248917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.593 [2024-12-09 10:41:07.249290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-12-09 10:41:07.249307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.593 [2024-12-09 10:41:07.249316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.593 [2024-12-09 10:41:07.249493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.593 [2024-12-09 10:41:07.249664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.593 [2024-12-09 10:41:07.249674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.593 [2024-12-09 10:41:07.249681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.593 [2024-12-09 10:41:07.249688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.593 [2024-12-09 10:41:07.261713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.593 [2024-12-09 10:41:07.262114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-12-09 10:41:07.262132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.593 [2024-12-09 10:41:07.262141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.593 [2024-12-09 10:41:07.262301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.593 [2024-12-09 10:41:07.262462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.593 [2024-12-09 10:41:07.262471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.593 [2024-12-09 10:41:07.262478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.593 [2024-12-09 10:41:07.262484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.593 [2024-12-09 10:41:07.274554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.593 [2024-12-09 10:41:07.274928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-12-09 10:41:07.274946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.593 [2024-12-09 10:41:07.274954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.593 [2024-12-09 10:41:07.275123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.593 [2024-12-09 10:41:07.275292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.593 [2024-12-09 10:41:07.275302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.593 [2024-12-09 10:41:07.275313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.593 [2024-12-09 10:41:07.275320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.593 [2024-12-09 10:41:07.287524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.593 [2024-12-09 10:41:07.287773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-12-09 10:41:07.287825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.593 [2024-12-09 10:41:07.287852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.593 [2024-12-09 10:41:07.288361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.593 [2024-12-09 10:41:07.288668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.593 [2024-12-09 10:41:07.288685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.593 [2024-12-09 10:41:07.288700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.593 [2024-12-09 10:41:07.288714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.593 [2024-12-09 10:41:07.302000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.593 [2024-12-09 10:41:07.302409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-12-09 10:41:07.302432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.593 [2024-12-09 10:41:07.302442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.593 [2024-12-09 10:41:07.302678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.593 [2024-12-09 10:41:07.302928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.593 [2024-12-09 10:41:07.302942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.593 [2024-12-09 10:41:07.302950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.593 [2024-12-09 10:41:07.302959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.314982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.315341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.315359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.315367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.315536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.315705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.315715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.315721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.315727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.327741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.327995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.328013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.328021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.328180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.328341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.328351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.328357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.328363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.340830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.341171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.341189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.341196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.341370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.341543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.341553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.341560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.341567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.353742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.354013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.354031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.354052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.354608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.354769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.354779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.354785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.354792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.366718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.367053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.367072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.367083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.367257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.367431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.367441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.367447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.367454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.379719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.380057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.380075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.380084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.380257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.380432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.380442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.380449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.380455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.392695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.393116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.393135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.393144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.393318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.393492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.393502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.393509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.393515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.405875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.406313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.406332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.406341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.406526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.406713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.406723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.406730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.406738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.418915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.419247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.419265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.419274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.419447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.419621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.419630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.419637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.419643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.431888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.432169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.432187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.432195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.432368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.432542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.432552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.432559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.432566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.444866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.445264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.445282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.445290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.445459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.445628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.445638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.445647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.445655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.457799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.458155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.458173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.458181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.853 [2024-12-09 10:41:07.458350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.853 [2024-12-09 10:41:07.458519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.853 [2024-12-09 10:41:07.458529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.853 [2024-12-09 10:41:07.458536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.853 [2024-12-09 10:41:07.458542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.853 [2024-12-09 10:41:07.470901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.853 [2024-12-09 10:41:07.471230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.853 [2024-12-09 10:41:07.471248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.853 [2024-12-09 10:41:07.471256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.471430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.471604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.471614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.471620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.471627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.483879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.854 [2024-12-09 10:41:07.484311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.854 [2024-12-09 10:41:07.484357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.854 [2024-12-09 10:41:07.484382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.484888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.485064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.485074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.485080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.485087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.496981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.854 [2024-12-09 10:41:07.497310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.854 [2024-12-09 10:41:07.497328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.854 [2024-12-09 10:41:07.497336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.497503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.497672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.497682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.497688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.497694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.509985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.854 [2024-12-09 10:41:07.510381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.854 [2024-12-09 10:41:07.510428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.854 [2024-12-09 10:41:07.510453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.510977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.511147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.511158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.511165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.511171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.522741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.854 [2024-12-09 10:41:07.523045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.854 [2024-12-09 10:41:07.523063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.854 [2024-12-09 10:41:07.523072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.523233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.523394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.523403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.523410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.523417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.535651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.854 [2024-12-09 10:41:07.536033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.854 [2024-12-09 10:41:07.536052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.854 [2024-12-09 10:41:07.536064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.536233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.536409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.536418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.536425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.536431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.548433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.854 [2024-12-09 10:41:07.548777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.854 [2024-12-09 10:41:07.548795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.854 [2024-12-09 10:41:07.548802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.548966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.549127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.549136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.549142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.549149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.561387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.854 [2024-12-09 10:41:07.561710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.854 [2024-12-09 10:41:07.561727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:29.854 [2024-12-09 10:41:07.561735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:29.854 [2024-12-09 10:41:07.561900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:29.854 [2024-12-09 10:41:07.562061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.854 [2024-12-09 10:41:07.562071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.854 [2024-12-09 10:41:07.562077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.854 [2024-12-09 10:41:07.562083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.854 [2024-12-09 10:41:07.574282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.574661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.574679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.574686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.574861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.575030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.575043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.575050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.575057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.587111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.587383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.587401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.587409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.587568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.587728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.587737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.587743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.587749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.599905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.600236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.600253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.600261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.600419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.600579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.600589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.600595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.600601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.612821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.613151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.613169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.613177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.613345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.613515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.613525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.613533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.613542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.625688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.626078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.626096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.626103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.626263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.626423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.626433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.626439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.626445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.638440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.638755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.638773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.638780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.638944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.639105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.639114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.639121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.639127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.651283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.651654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.651671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.651679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.651971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.652135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.652145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.652151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.652159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.664254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.664618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.664635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.664644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.664824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.664995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.665005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.665012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.665018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.677253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.677604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.677622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.677629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.677798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.677972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.114 [2024-12-09 10:41:07.677983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.114 [2024-12-09 10:41:07.677989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.114 [2024-12-09 10:41:07.677996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.114 [2024-12-09 10:41:07.690206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.114 [2024-12-09 10:41:07.690558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.114 [2024-12-09 10:41:07.690576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.114 [2024-12-09 10:41:07.690584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.114 [2024-12-09 10:41:07.690753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.114 [2024-12-09 10:41:07.690925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.690935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.690942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.690949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.703044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.703431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.703449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.703456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.703619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.703780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.703790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.703796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.703803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.715812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.716132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.716149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.716157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.716317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.716476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.716486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.716492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.716498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.728648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.729023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.729041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.729048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.729207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.729367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.729376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.729383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.729389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.741381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.741773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.741832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.741858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.742301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.742462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.742475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.742481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.742487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.754129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.754508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.754525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.754533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.754691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.754855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.754865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.754871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.754878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.766866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.767270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.767316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.767341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.767776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.767961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.767972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.767979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.767986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.779704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.780038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.780085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.780109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.780652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.780820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.780829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.780836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.780845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.792654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.793046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.793063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.793070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.793231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.793391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.793401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.793408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.793414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.805415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.805721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.805739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.805746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.805911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.115 [2024-12-09 10:41:07.806073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.115 [2024-12-09 10:41:07.806082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.115 [2024-12-09 10:41:07.806089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.115 [2024-12-09 10:41:07.806095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.115 [2024-12-09 10:41:07.818223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.115 [2024-12-09 10:41:07.818590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.115 [2024-12-09 10:41:07.818606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.115 [2024-12-09 10:41:07.818613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.115 [2024-12-09 10:41:07.818773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.116 [2024-12-09 10:41:07.818940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.116 [2024-12-09 10:41:07.818950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.116 [2024-12-09 10:41:07.818956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.116 [2024-12-09 10:41:07.818962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.116 [2024-12-09 10:41:07.831086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.116 [2024-12-09 10:41:07.831496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.116 [2024-12-09 10:41:07.831517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.116 [2024-12-09 10:41:07.831525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.116 [2024-12-09 10:41:07.831693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.116 [2024-12-09 10:41:07.831868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.116 [2024-12-09 10:41:07.831879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.116 [2024-12-09 10:41:07.831886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.116 [2024-12-09 10:41:07.831893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.374 [2024-12-09 10:41:07.844130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.374 [2024-12-09 10:41:07.844528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.374 [2024-12-09 10:41:07.844546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.374 [2024-12-09 10:41:07.844553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.374 [2024-12-09 10:41:07.844722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.374 [2024-12-09 10:41:07.844897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.374 [2024-12-09 10:41:07.844908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.374 [2024-12-09 10:41:07.844914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.374 [2024-12-09 10:41:07.844921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.374 [2024-12-09 10:41:07.856918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.374 [2024-12-09 10:41:07.857238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.374 [2024-12-09 10:41:07.857256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.374 [2024-12-09 10:41:07.857263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.374 [2024-12-09 10:41:07.857422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.374 [2024-12-09 10:41:07.857583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.374 [2024-12-09 10:41:07.857592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.374 [2024-12-09 10:41:07.857599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.374 [2024-12-09 10:41:07.857605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.374 [2024-12-09 10:41:07.869740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.374 [2024-12-09 10:41:07.870127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.374 [2024-12-09 10:41:07.870144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.374 [2024-12-09 10:41:07.870152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.374 [2024-12-09 10:41:07.870316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.374 [2024-12-09 10:41:07.870476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.374 [2024-12-09 10:41:07.870486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.374 [2024-12-09 10:41:07.870492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.374 [2024-12-09 10:41:07.870498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.374 [2024-12-09 10:41:07.882584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.374 [2024-12-09 10:41:07.882998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.374 [2024-12-09 10:41:07.883015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.374 [2024-12-09 10:41:07.883023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.374 [2024-12-09 10:41:07.883183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.374 [2024-12-09 10:41:07.883344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.883353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.883359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.883366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.895344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.895734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.895751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.895759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.895924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.896085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.896094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.896101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.896107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.908073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.908447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.908493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.908518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.908936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.909099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.909111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.909117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.909124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.920811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.921177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.921195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.921204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.921372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.921541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.921550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.921557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.921564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.933789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.934188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.934206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.934215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.934385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.934554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.934564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.934570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.934577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.946554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.946872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.946890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.946898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.947058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.947218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.947227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.947234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.947241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.959380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.959775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.959792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.959800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.959965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.960125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.960135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.960141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.960147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.972224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.972619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.972664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.972689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.973116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.973278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.973288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.973294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.973301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.985052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.985424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.985442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.985450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.985610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.985770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.985779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.985785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.985791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:07.997930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:07.998238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:07.998259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:07.998266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:07.998425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.375 [2024-12-09 10:41:07.998586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.375 [2024-12-09 10:41:07.998595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.375 [2024-12-09 10:41:07.998601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.375 [2024-12-09 10:41:07.998607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.375 [2024-12-09 10:41:08.010694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.375 [2024-12-09 10:41:08.011009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.375 [2024-12-09 10:41:08.011026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.375 [2024-12-09 10:41:08.011034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.375 [2024-12-09 10:41:08.011193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.376 [2024-12-09 10:41:08.011353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.376 [2024-12-09 10:41:08.011362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.376 [2024-12-09 10:41:08.011368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.376 [2024-12-09 10:41:08.011374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.376 [2024-12-09 10:41:08.023567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.376 [2024-12-09 10:41:08.023907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.376 [2024-12-09 10:41:08.023925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.376 [2024-12-09 10:41:08.023932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.376 [2024-12-09 10:41:08.024092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.376 [2024-12-09 10:41:08.024253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.376 [2024-12-09 10:41:08.024262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.376 [2024-12-09 10:41:08.024268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.376 [2024-12-09 10:41:08.024274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.376 [2024-12-09 10:41:08.036406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.376 [2024-12-09 10:41:08.036688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.376 [2024-12-09 10:41:08.036735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.376 [2024-12-09 10:41:08.036760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.376 [2024-12-09 10:41:08.037292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.376 [2024-12-09 10:41:08.037662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.376 [2024-12-09 10:41:08.037680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.376 [2024-12-09 10:41:08.037694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.376 [2024-12-09 10:41:08.037707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.376 [2024-12-09 10:41:08.050928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.376 [2024-12-09 10:41:08.051379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.376 [2024-12-09 10:41:08.051400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.376 [2024-12-09 10:41:08.051410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.376 [2024-12-09 10:41:08.051646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.376 [2024-12-09 10:41:08.051889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.376 [2024-12-09 10:41:08.051902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.376 [2024-12-09 10:41:08.051911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.376 [2024-12-09 10:41:08.051920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.376 [2024-12-09 10:41:08.063693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.376 [2024-12-09 10:41:08.064090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.376 [2024-12-09 10:41:08.064107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.376 [2024-12-09 10:41:08.064116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.376 [2024-12-09 10:41:08.064276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.376 [2024-12-09 10:41:08.064437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.376 [2024-12-09 10:41:08.064446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.376 [2024-12-09 10:41:08.064453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.376 [2024-12-09 10:41:08.064459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.376 [2024-12-09 10:41:08.076622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.376 [2024-12-09 10:41:08.077032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.376 [2024-12-09 10:41:08.077050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.376 [2024-12-09 10:41:08.077058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.376 [2024-12-09 10:41:08.077230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.376 [2024-12-09 10:41:08.077404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.376 [2024-12-09 10:41:08.077414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.376 [2024-12-09 10:41:08.077423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.376 [2024-12-09 10:41:08.077430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.376 [2024-12-09 10:41:08.089373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.376 [2024-12-09 10:41:08.089755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.376 [2024-12-09 10:41:08.089801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.376 [2024-12-09 10:41:08.089839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.376 [2024-12-09 10:41:08.090332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.376 [2024-12-09 10:41:08.090494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.376 [2024-12-09 10:41:08.090503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.376 [2024-12-09 10:41:08.090509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.376 [2024-12-09 10:41:08.090515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.635 [2024-12-09 10:41:08.102266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.635 [2024-12-09 10:41:08.102593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.635 [2024-12-09 10:41:08.102611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.635 [2024-12-09 10:41:08.102619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.635 [2024-12-09 10:41:08.102787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.635 [2024-12-09 10:41:08.102963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.635 [2024-12-09 10:41:08.102974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.635 [2024-12-09 10:41:08.102980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.635 [2024-12-09 10:41:08.102987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.635 7157.75 IOPS, 27.96 MiB/s [2024-12-09T09:41:08.359Z] [2024-12-09 10:41:08.116156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.635 [2024-12-09 10:41:08.116551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.635 [2024-12-09 10:41:08.116569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.635 [2024-12-09 10:41:08.116576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.635 [2024-12-09 10:41:08.116736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.635 [2024-12-09 10:41:08.116902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.635 [2024-12-09 10:41:08.116912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.635 [2024-12-09 10:41:08.116918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.635 [2024-12-09 10:41:08.116925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.635 [2024-12-09 10:41:08.128908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.635 [2024-12-09 10:41:08.129219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.635 [2024-12-09 10:41:08.129236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.635 [2024-12-09 10:41:08.129243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.635 [2024-12-09 10:41:08.129403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.635 [2024-12-09 10:41:08.129563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.635 [2024-12-09 10:41:08.129572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.635 [2024-12-09 10:41:08.129578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.635 [2024-12-09 10:41:08.129584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.635 [2024-12-09 10:41:08.141715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.635 [2024-12-09 10:41:08.142104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.635 [2024-12-09 10:41:08.142122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.635 [2024-12-09 10:41:08.142130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.635 [2024-12-09 10:41:08.142299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.635 [2024-12-09 10:41:08.142468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.635 [2024-12-09 10:41:08.142478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.635 [2024-12-09 10:41:08.142484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.635 [2024-12-09 10:41:08.142491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.635 [2024-12-09 10:41:08.154514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.635 [2024-12-09 10:41:08.154906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.635 [2024-12-09 10:41:08.154923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.635 [2024-12-09 10:41:08.154931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.635 [2024-12-09 10:41:08.155090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.635 [2024-12-09 10:41:08.155250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.635 [2024-12-09 10:41:08.155260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.635 [2024-12-09 10:41:08.155266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.635 [2024-12-09 10:41:08.155272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.635 [2024-12-09 10:41:08.167352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.635 [2024-12-09 10:41:08.167741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.635 [2024-12-09 10:41:08.167761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.635 [2024-12-09 10:41:08.167785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.635 [2024-12-09 10:41:08.167961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.635 [2024-12-09 10:41:08.168131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.635 [2024-12-09 10:41:08.168141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.635 [2024-12-09 10:41:08.168147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.635 [2024-12-09 10:41:08.168154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.635 [2024-12-09 10:41:08.180195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.635 [2024-12-09 10:41:08.180596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.635 [2024-12-09 10:41:08.180641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.635 [2024-12-09 10:41:08.180665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.635 [2024-12-09 10:41:08.181266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.635 [2024-12-09 10:41:08.181574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.635 [2024-12-09 10:41:08.181584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.635 [2024-12-09 10:41:08.181590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.635 [2024-12-09 10:41:08.181597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.635 [2024-12-09 10:41:08.193233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.635 [2024-12-09 10:41:08.193646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.635 [2024-12-09 10:41:08.193691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.635 [2024-12-09 10:41:08.193715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.635 [2024-12-09 10:41:08.194315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.635 [2024-12-09 10:41:08.194753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.635 [2024-12-09 10:41:08.194763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.635 [2024-12-09 10:41:08.194769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.635 [2024-12-09 10:41:08.194776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.635 [2024-12-09 10:41:08.206263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.206665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.206711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.206735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.207340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.207709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.207718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.207725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.207731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.219142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.219544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.219590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.219614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.220216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.220707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.220716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.220722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.220728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.231978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.232375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.232393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.232400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.232560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.232720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.232729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.232736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.232742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.244852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.245250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.245296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.245321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.245784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.245951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.245961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.245971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.245978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.257663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.257985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.258002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.258010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.258170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.258331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.258340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.258346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.258353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.270486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.270881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.270900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.270908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.271069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.271230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.271240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.271246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.271252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.283237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.283632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.283649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.283657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.283822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.283983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.283993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.283999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.284005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.296188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.296607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.296625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.296632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.296792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.296959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.296969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.296975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.296981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.308976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.309329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.309346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.309354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.309513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.309675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.309684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.309690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.309697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.636 [2024-12-09 10:41:08.321842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.636 [2024-12-09 10:41:08.322255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.636 [2024-12-09 10:41:08.322300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.636 [2024-12-09 10:41:08.322326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.636 [2024-12-09 10:41:08.322864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.636 [2024-12-09 10:41:08.323026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.636 [2024-12-09 10:41:08.323034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.636 [2024-12-09 10:41:08.323040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.636 [2024-12-09 10:41:08.323046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.637 [2024-12-09 10:41:08.334582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.637 [2024-12-09 10:41:08.334970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.637 [2024-12-09 10:41:08.334991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.637 [2024-12-09 10:41:08.334999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.637 [2024-12-09 10:41:08.335158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.637 [2024-12-09 10:41:08.335319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.637 [2024-12-09 10:41:08.335328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.637 [2024-12-09 10:41:08.335335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.637 [2024-12-09 10:41:08.335341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.637 [2024-12-09 10:41:08.347425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.637 [2024-12-09 10:41:08.347838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.637 [2024-12-09 10:41:08.347855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.637 [2024-12-09 10:41:08.347863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.637 [2024-12-09 10:41:08.348023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.637 [2024-12-09 10:41:08.348184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.637 [2024-12-09 10:41:08.348193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.637 [2024-12-09 10:41:08.348199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.637 [2024-12-09 10:41:08.348205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.895 [2024-12-09 10:41:08.360437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.895 [2024-12-09 10:41:08.360859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.895 [2024-12-09 10:41:08.360877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.895 [2024-12-09 10:41:08.360886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.895 [2024-12-09 10:41:08.361054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.895 [2024-12-09 10:41:08.361224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.895 [2024-12-09 10:41:08.361234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.895 [2024-12-09 10:41:08.361240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.895 [2024-12-09 10:41:08.361246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.895 [2024-12-09 10:41:08.373265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.895 [2024-12-09 10:41:08.373675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.895 [2024-12-09 10:41:08.373692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.895 [2024-12-09 10:41:08.373700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.895 [2024-12-09 10:41:08.373866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.895 [2024-12-09 10:41:08.374030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.895 [2024-12-09 10:41:08.374039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.895 [2024-12-09 10:41:08.374045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.895 [2024-12-09 10:41:08.374052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.895 [2024-12-09 10:41:08.386146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.895 [2024-12-09 10:41:08.386581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.895 [2024-12-09 10:41:08.386599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.895 [2024-12-09 10:41:08.386607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.895 [2024-12-09 10:41:08.386767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.895 [2024-12-09 10:41:08.386934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.895 [2024-12-09 10:41:08.386944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.895 [2024-12-09 10:41:08.386950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.895 [2024-12-09 10:41:08.386958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.895 [2024-12-09 10:41:08.398922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.895 [2024-12-09 10:41:08.399326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.895 [2024-12-09 10:41:08.399345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.895 [2024-12-09 10:41:08.399353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.895 [2024-12-09 10:41:08.399525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.895 [2024-12-09 10:41:08.399695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.895 [2024-12-09 10:41:08.399705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.895 [2024-12-09 10:41:08.399712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.895 [2024-12-09 10:41:08.399718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.895 [2024-12-09 10:41:08.411792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.895 [2024-12-09 10:41:08.412192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.895 [2024-12-09 10:41:08.412238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.895 [2024-12-09 10:41:08.412262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.895 [2024-12-09 10:41:08.412857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.895 [2024-12-09 10:41:08.413061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.895 [2024-12-09 10:41:08.413071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.895 [2024-12-09 10:41:08.413081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.895 [2024-12-09 10:41:08.413088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.895 [2024-12-09 10:41:08.424565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.895 [2024-12-09 10:41:08.424980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.895 [2024-12-09 10:41:08.424998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.895 [2024-12-09 10:41:08.425005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.896 [2024-12-09 10:41:08.425166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.896 [2024-12-09 10:41:08.425326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.896 [2024-12-09 10:41:08.425336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.896 [2024-12-09 10:41:08.425342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.896 [2024-12-09 10:41:08.425348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.896 [2024-12-09 10:41:08.437361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.896 [2024-12-09 10:41:08.437704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.896 [2024-12-09 10:41:08.437722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.896 [2024-12-09 10:41:08.437730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.896 [2024-12-09 10:41:08.437906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.896 [2024-12-09 10:41:08.438077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.896 [2024-12-09 10:41:08.438086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.896 [2024-12-09 10:41:08.438093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.896 [2024-12-09 10:41:08.438099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.896 [2024-12-09 10:41:08.450479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.896 [2024-12-09 10:41:08.450920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.896 [2024-12-09 10:41:08.450938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.896 [2024-12-09 10:41:08.450946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.896 [2024-12-09 10:41:08.451120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.896 [2024-12-09 10:41:08.451281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.896 [2024-12-09 10:41:08.451290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.896 [2024-12-09 10:41:08.451297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.896 [2024-12-09 10:41:08.451303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.896 [2024-12-09 10:41:08.463494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.896 [2024-12-09 10:41:08.463890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.896 [2024-12-09 10:41:08.463908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.896 [2024-12-09 10:41:08.463916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.896 [2024-12-09 10:41:08.464085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.896 [2024-12-09 10:41:08.464255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.896 [2024-12-09 10:41:08.464265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.896 [2024-12-09 10:41:08.464271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.896 [2024-12-09 10:41:08.464278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.896 [2024-12-09 10:41:08.476364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.896 [2024-12-09 10:41:08.476798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.896 [2024-12-09 10:41:08.476857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.896 [2024-12-09 10:41:08.476882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.896 [2024-12-09 10:41:08.477409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.896 [2024-12-09 10:41:08.477570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.896 [2024-12-09 10:41:08.477580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.896 [2024-12-09 10:41:08.477586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.896 [2024-12-09 10:41:08.477592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.896 [2024-12-09 10:41:08.489349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.896 [2024-12-09 10:41:08.489769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.896 [2024-12-09 10:41:08.489788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:30.896 [2024-12-09 10:41:08.489796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:30.896 [2024-12-09 10:41:08.489970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:30.896 [2024-12-09 10:41:08.490140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.896 [2024-12-09 10:41:08.490150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.896 [2024-12-09 10:41:08.490156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.896 [2024-12-09 10:41:08.490163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.896 [2024-12-09 10:41:08.502335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.896 [2024-12-09 10:41:08.502757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.896 [2024-12-09 10:41:08.502774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.896 [2024-12-09 10:41:08.502785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.896 [2024-12-09 10:41:08.502960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.896 [2024-12-09 10:41:08.503129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.896 [2024-12-09 10:41:08.503137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.896 [2024-12-09 10:41:08.503144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.896 [2024-12-09 10:41:08.503150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.896 [2024-12-09 10:41:08.515367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.896 [2024-12-09 10:41:08.515743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.896 [2024-12-09 10:41:08.515760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.896 [2024-12-09 10:41:08.515767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.896 [2024-12-09 10:41:08.515942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.896 [2024-12-09 10:41:08.516112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.896 [2024-12-09 10:41:08.516121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.896 [2024-12-09 10:41:08.516127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.896 [2024-12-09 10:41:08.516133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.896 [2024-12-09 10:41:08.528222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.896 [2024-12-09 10:41:08.528540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.896 [2024-12-09 10:41:08.528556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.896 [2024-12-09 10:41:08.528563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.896 [2024-12-09 10:41:08.528723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.896 [2024-12-09 10:41:08.528890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.896 [2024-12-09 10:41:08.528899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.896 [2024-12-09 10:41:08.528905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.896 [2024-12-09 10:41:08.528911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.896 [2024-12-09 10:41:08.541038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.896 [2024-12-09 10:41:08.541347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.896 [2024-12-09 10:41:08.541363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.896 [2024-12-09 10:41:08.541370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.896 [2024-12-09 10:41:08.541530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.896 [2024-12-09 10:41:08.541693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.896 [2024-12-09 10:41:08.541701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.896 [2024-12-09 10:41:08.541707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.896 [2024-12-09 10:41:08.541713] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.896 [2024-12-09 10:41:08.553805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.896 [2024-12-09 10:41:08.554226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.896 [2024-12-09 10:41:08.554271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.897 [2024-12-09 10:41:08.554295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.897 [2024-12-09 10:41:08.554896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.897 [2024-12-09 10:41:08.555469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.897 [2024-12-09 10:41:08.555477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.897 [2024-12-09 10:41:08.555483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.897 [2024-12-09 10:41:08.555490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.897 [2024-12-09 10:41:08.566552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.897 [2024-12-09 10:41:08.566939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.897 [2024-12-09 10:41:08.566956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.897 [2024-12-09 10:41:08.566963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.897 [2024-12-09 10:41:08.567122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.897 [2024-12-09 10:41:08.567282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.897 [2024-12-09 10:41:08.567290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.897 [2024-12-09 10:41:08.567295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.897 [2024-12-09 10:41:08.567301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.897 [2024-12-09 10:41:08.579379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.897 [2024-12-09 10:41:08.579775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.897 [2024-12-09 10:41:08.579792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.897 [2024-12-09 10:41:08.579817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.897 [2024-12-09 10:41:08.580003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.897 [2024-12-09 10:41:08.580172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.897 [2024-12-09 10:41:08.580180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.897 [2024-12-09 10:41:08.580190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.897 [2024-12-09 10:41:08.580197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.897 [2024-12-09 10:41:08.592230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.897 [2024-12-09 10:41:08.592608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.897 [2024-12-09 10:41:08.592653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.897 [2024-12-09 10:41:08.592676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.897 [2024-12-09 10:41:08.593273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.897 [2024-12-09 10:41:08.593790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.897 [2024-12-09 10:41:08.593815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.897 [2024-12-09 10:41:08.593831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.897 [2024-12-09 10:41:08.593844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.897 [2024-12-09 10:41:08.607390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.897 [2024-12-09 10:41:08.607903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.897 [2024-12-09 10:41:08.607925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:30.897 [2024-12-09 10:41:08.607936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:30.897 [2024-12-09 10:41:08.608191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:30.897 [2024-12-09 10:41:08.608446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.897 [2024-12-09 10:41:08.608458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.897 [2024-12-09 10:41:08.608467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.897 [2024-12-09 10:41:08.608476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.156 [2024-12-09 10:41:08.620432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.156 [2024-12-09 10:41:08.620833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.156 [2024-12-09 10:41:08.620851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.156 [2024-12-09 10:41:08.620858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.156 [2024-12-09 10:41:08.621032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.156 [2024-12-09 10:41:08.621205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.156 [2024-12-09 10:41:08.621213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.156 [2024-12-09 10:41:08.621220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.156 [2024-12-09 10:41:08.621226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.156 [2024-12-09 10:41:08.633294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.156 [2024-12-09 10:41:08.633684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.156 [2024-12-09 10:41:08.633701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.156 [2024-12-09 10:41:08.633708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.156 [2024-12-09 10:41:08.633892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.156 [2024-12-09 10:41:08.634062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.156 [2024-12-09 10:41:08.634070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.156 [2024-12-09 10:41:08.634076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.156 [2024-12-09 10:41:08.634083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.156 [2024-12-09 10:41:08.646122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.156 [2024-12-09 10:41:08.646508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.156 [2024-12-09 10:41:08.646523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.156 [2024-12-09 10:41:08.646530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.156 [2024-12-09 10:41:08.646690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.156 [2024-12-09 10:41:08.646858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.156 [2024-12-09 10:41:08.646866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.156 [2024-12-09 10:41:08.646873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.156 [2024-12-09 10:41:08.646878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.156 [2024-12-09 10:41:08.658857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.156 [2024-12-09 10:41:08.659242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.156 [2024-12-09 10:41:08.659259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.156 [2024-12-09 10:41:08.659266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.156 [2024-12-09 10:41:08.659425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.156 [2024-12-09 10:41:08.659584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.156 [2024-12-09 10:41:08.659592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.156 [2024-12-09 10:41:08.659597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.156 [2024-12-09 10:41:08.659603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.156 [2024-12-09 10:41:08.671819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.156 [2024-12-09 10:41:08.672132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.672149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.672159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.672318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.672479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.672486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.672493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.157 [2024-12-09 10:41:08.672498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.157 [2024-12-09 10:41:08.684581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.157 [2024-12-09 10:41:08.684972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.684988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.684995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.685156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.685316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.685324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.685329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.157 [2024-12-09 10:41:08.685335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.157 [2024-12-09 10:41:08.697322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.157 [2024-12-09 10:41:08.697666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.697683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.697690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.697882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.698057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.698065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.698072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.157 [2024-12-09 10:41:08.698078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.157 [2024-12-09 10:41:08.710318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.157 [2024-12-09 10:41:08.710717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.710734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.710741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.710934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.711112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.711120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.711127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.157 [2024-12-09 10:41:08.711133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.157 [2024-12-09 10:41:08.723251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.157 [2024-12-09 10:41:08.723672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.723688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.723696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.723887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.724061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.724069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.724077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.157 [2024-12-09 10:41:08.724083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.157 [2024-12-09 10:41:08.736012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.157 [2024-12-09 10:41:08.736395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.736411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.736418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.736576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.736738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.736746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.736752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.157 [2024-12-09 10:41:08.736757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.157 [2024-12-09 10:41:08.748863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.157 [2024-12-09 10:41:08.749200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.749216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.749223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.749382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.749543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.749551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.749560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.157 [2024-12-09 10:41:08.749566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.157 [2024-12-09 10:41:08.761668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.157 [2024-12-09 10:41:08.762068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.157 [2024-12-09 10:41:08.762113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.157 [2024-12-09 10:41:08.762137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.157 [2024-12-09 10:41:08.762685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.157 [2024-12-09 10:41:08.762853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.157 [2024-12-09 10:41:08.762862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.157 [2024-12-09 10:41:08.762868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.762874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.774593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.775022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.775040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.775047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.775220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.158 [2024-12-09 10:41:08.775395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.158 [2024-12-09 10:41:08.775403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.158 [2024-12-09 10:41:08.775409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.775416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.787484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.787943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.787959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.787966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.788125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.158 [2024-12-09 10:41:08.788284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.158 [2024-12-09 10:41:08.788292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.158 [2024-12-09 10:41:08.788298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.788304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.800399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.800835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.800881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.800904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.801318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.158 [2024-12-09 10:41:08.801477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.158 [2024-12-09 10:41:08.801485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.158 [2024-12-09 10:41:08.801491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.801497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.813219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.813601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.813617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.813624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.813793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.158 [2024-12-09 10:41:08.813968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.158 [2024-12-09 10:41:08.813976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.158 [2024-12-09 10:41:08.813982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.813988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.826056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.826421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.826437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.826444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.826604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.158 [2024-12-09 10:41:08.826764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.158 [2024-12-09 10:41:08.826772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.158 [2024-12-09 10:41:08.826778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.826783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.838823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.839110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.839156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.839186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.839771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.158 [2024-12-09 10:41:08.840268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.158 [2024-12-09 10:41:08.840277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.158 [2024-12-09 10:41:08.840283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.840289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.851588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.851958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.851974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.851981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.852141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.158 [2024-12-09 10:41:08.852301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.158 [2024-12-09 10:41:08.852308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.158 [2024-12-09 10:41:08.852314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.158 [2024-12-09 10:41:08.852320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.158 [2024-12-09 10:41:08.864508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.158 [2024-12-09 10:41:08.864895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.158 [2024-12-09 10:41:08.864912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.158 [2024-12-09 10:41:08.864919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.158 [2024-12-09 10:41:08.865078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.159 [2024-12-09 10:41:08.865238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.159 [2024-12-09 10:41:08.865246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.159 [2024-12-09 10:41:08.865252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.159 [2024-12-09 10:41:08.865258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.159 [2024-12-09 10:41:08.877630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.159 [2024-12-09 10:41:08.877977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.159 [2024-12-09 10:41:08.877994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.159 [2024-12-09 10:41:08.878002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.159 [2024-12-09 10:41:08.878176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.417 [2024-12-09 10:41:08.878352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.417 [2024-12-09 10:41:08.878361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.417 [2024-12-09 10:41:08.878367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.417 [2024-12-09 10:41:08.878373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.417 [2024-12-09 10:41:08.890582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.417 [2024-12-09 10:41:08.890928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.417 [2024-12-09 10:41:08.890945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.417 [2024-12-09 10:41:08.890952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.417 [2024-12-09 10:41:08.891111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.417 [2024-12-09 10:41:08.891271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.417 [2024-12-09 10:41:08.891279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.417 [2024-12-09 10:41:08.891284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.417 [2024-12-09 10:41:08.891290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.417 [2024-12-09 10:41:08.903421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.417 [2024-12-09 10:41:08.903835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.417 [2024-12-09 10:41:08.903852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.417 [2024-12-09 10:41:08.903859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.417 [2024-12-09 10:41:08.904018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.417 [2024-12-09 10:41:08.904178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.417 [2024-12-09 10:41:08.904186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.417 [2024-12-09 10:41:08.904192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.417 [2024-12-09 10:41:08.904198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.417 [2024-12-09 10:41:08.916300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.417 [2024-12-09 10:41:08.916703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.417 [2024-12-09 10:41:08.916719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.417 [2024-12-09 10:41:08.916726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.417 [2024-12-09 10:41:08.916891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.417 [2024-12-09 10:41:08.917051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.417 [2024-12-09 10:41:08.917059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.417 [2024-12-09 10:41:08.917065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.417 [2024-12-09 10:41:08.917076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.417 [2024-12-09 10:41:08.929167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.417 [2024-12-09 10:41:08.929515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.417 [2024-12-09 10:41:08.929560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:08.929583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:08.930139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:08.930300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:08.930308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:08.930314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:08.930320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.418 [2024-12-09 10:41:08.941965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.418 [2024-12-09 10:41:08.942289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.418 [2024-12-09 10:41:08.942325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:08.942350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:08.942949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:08.943435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:08.943443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:08.943449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:08.943455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.418 [2024-12-09 10:41:08.954712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.418 [2024-12-09 10:41:08.955016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.418 [2024-12-09 10:41:08.955033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:08.955041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:08.955214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:08.955388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:08.955397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:08.955403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:08.955409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.418 [2024-12-09 10:41:08.967836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.418 [2024-12-09 10:41:08.968191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.418 [2024-12-09 10:41:08.968207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:08.968214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:08.968383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:08.968552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:08.968560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:08.968567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:08.968573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.418 [2024-12-09 10:41:08.980838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.418 [2024-12-09 10:41:08.981221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.418 [2024-12-09 10:41:08.981265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:08.981289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:08.981888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:08.982472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:08.982480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:08.982486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:08.982492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.418 [2024-12-09 10:41:08.993747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.418 [2024-12-09 10:41:08.994110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.418 [2024-12-09 10:41:08.994126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:08.994133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:08.994301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:08.994469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:08.994477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:08.994484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:08.994490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.418 [2024-12-09 10:41:09.006776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.418 [2024-12-09 10:41:09.007222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.418 [2024-12-09 10:41:09.007276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:09.007300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:09.007904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:09.008356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:09.008373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:09.008387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:09.008401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.418 [2024-12-09 10:41:09.021833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.418 [2024-12-09 10:41:09.022277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.418 [2024-12-09 10:41:09.022299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.418 [2024-12-09 10:41:09.022310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.418 [2024-12-09 10:41:09.022565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.418 [2024-12-09 10:41:09.022828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.418 [2024-12-09 10:41:09.022840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.418 [2024-12-09 10:41:09.022850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.418 [2024-12-09 10:41:09.022858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.419 [2024-12-09 10:41:09.034887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.419 [2024-12-09 10:41:09.035243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.419 [2024-12-09 10:41:09.035260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.419 [2024-12-09 10:41:09.035268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.419 [2024-12-09 10:41:09.035441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.419 [2024-12-09 10:41:09.035614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.419 [2024-12-09 10:41:09.035623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.419 [2024-12-09 10:41:09.035629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.419 [2024-12-09 10:41:09.035635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.419 [2024-12-09 10:41:09.047631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.419 [2024-12-09 10:41:09.047941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.419 [2024-12-09 10:41:09.047957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.419 [2024-12-09 10:41:09.047964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.419 [2024-12-09 10:41:09.048125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.419 [2024-12-09 10:41:09.048283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.419 [2024-12-09 10:41:09.048294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.419 [2024-12-09 10:41:09.048300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.419 [2024-12-09 10:41:09.048305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.419 [2024-12-09 10:41:09.060406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.419 [2024-12-09 10:41:09.060820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.419 [2024-12-09 10:41:09.060837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.419 [2024-12-09 10:41:09.060844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.419 [2024-12-09 10:41:09.061003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.419 [2024-12-09 10:41:09.061164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.419 [2024-12-09 10:41:09.061172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.419 [2024-12-09 10:41:09.061178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.419 [2024-12-09 10:41:09.061184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.419 [2024-12-09 10:41:09.073147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.419 [2024-12-09 10:41:09.073537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.419 [2024-12-09 10:41:09.073554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.419 [2024-12-09 10:41:09.073561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.419 [2024-12-09 10:41:09.073721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.419 [2024-12-09 10:41:09.073888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.419 [2024-12-09 10:41:09.073897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.419 [2024-12-09 10:41:09.073903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.419 [2024-12-09 10:41:09.073909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.419 [2024-12-09 10:41:09.086000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.419 [2024-12-09 10:41:09.086363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.419 [2024-12-09 10:41:09.086379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:31.419 [2024-12-09 10:41:09.086386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:31.419 [2024-12-09 10:41:09.086545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:31.419 [2024-12-09 10:41:09.086704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.419 [2024-12-09 10:41:09.086713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.419 [2024-12-09 10:41:09.086719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.419 [2024-12-09 10:41:09.086728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.419 [2024-12-09 10:41:09.098812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.419 [2024-12-09 10:41:09.099129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.419 [2024-12-09 10:41:09.099145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.419 [2024-12-09 10:41:09.099152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.419 [2024-12-09 10:41:09.099312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.419 [2024-12-09 10:41:09.099472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.419 [2024-12-09 10:41:09.099480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.419 [2024-12-09 10:41:09.099486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.419 [2024-12-09 10:41:09.099492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.419 [2024-12-09 10:41:09.111651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.419 [2024-12-09 10:41:09.112017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.419 [2024-12-09 10:41:09.112033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.419 [2024-12-09 10:41:09.112040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.419 [2024-12-09 10:41:09.112199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.419 [2024-12-09 10:41:09.112359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.419 [2024-12-09 10:41:09.112367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.419 [2024-12-09 10:41:09.112372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.419 [2024-12-09 10:41:09.112378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.419 5726.20 IOPS, 22.37 MiB/s [2024-12-09T09:41:09.143Z]
00:30:31.419 [2024-12-09 10:41:09.124534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.419 [2024-12-09 10:41:09.124965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.419 [2024-12-09 10:41:09.125001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.419 [2024-12-09 10:41:09.125028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.419 [2024-12-09 10:41:09.125584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.419 [2024-12-09 10:41:09.125744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.419 [2024-12-09 10:41:09.125752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.419 [2024-12-09 10:41:09.125758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.419 [2024-12-09 10:41:09.125763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.419 [2024-12-09 10:41:09.137577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.419 [2024-12-09 10:41:09.137975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.419 [2024-12-09 10:41:09.137992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.420 [2024-12-09 10:41:09.137999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.420 [2024-12-09 10:41:09.138167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.420 [2024-12-09 10:41:09.138337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.420 [2024-12-09 10:41:09.138345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.420 [2024-12-09 10:41:09.138351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.420 [2024-12-09 10:41:09.138357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.678 [2024-12-09 10:41:09.150626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.678 [2024-12-09 10:41:09.150973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.678 [2024-12-09 10:41:09.151000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.678 [2024-12-09 10:41:09.151008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.678 [2024-12-09 10:41:09.151168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.678 [2024-12-09 10:41:09.151328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.678 [2024-12-09 10:41:09.151336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.678 [2024-12-09 10:41:09.151342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.678 [2024-12-09 10:41:09.151348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.678 [2024-12-09 10:41:09.163493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.678 [2024-12-09 10:41:09.163817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.678 [2024-12-09 10:41:09.163834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.678 [2024-12-09 10:41:09.163841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.164000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.164159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.164167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.164173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.164178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.176356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.176750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.176766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.176773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.176963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.177132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.177140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.177147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.177153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.189230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.189622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.189638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.189645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.189805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.189973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.189981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.189987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.189993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.201983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.202372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.202388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.202396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.202556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.202716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.202724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.202730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.202735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.214949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.215342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.215359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.215366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.215534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.215703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.215714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.215721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.215727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.228009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.228331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.228348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.228356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.228524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.228693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.228701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.228708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.228715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.240856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.241231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.241247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.241254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.241414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.241574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.241582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.241588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.241594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.253730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.254088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.254133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.254158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.254741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.255250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.255259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.255265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.255275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.266490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.266879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.266896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.266903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.267062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.267223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.267231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.267236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.267242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.279225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.279609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.679 [2024-12-09 10:41:09.279625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.679 [2024-12-09 10:41:09.279633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.679 [2024-12-09 10:41:09.279800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.679 [2024-12-09 10:41:09.279974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.679 [2024-12-09 10:41:09.279983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.679 [2024-12-09 10:41:09.279989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.679 [2024-12-09 10:41:09.279995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.679 [2024-12-09 10:41:09.292034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.679 [2024-12-09 10:41:09.292424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.292440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.292447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.292607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.292766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.292774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.292780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.292785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.304957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.305355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.305371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.305378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.305537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.305697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.305705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.305710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.305716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.317825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.318224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.318270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.318293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.318892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.319373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.319381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.319387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.319393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.330694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.331066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.331083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.331090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.331249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.331410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.331419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.331427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.331435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.343441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.343815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.343832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.343839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.344024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.344194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.344202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.344208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.344214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.356270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.356649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.356665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.356672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.356855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.357025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.357034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.357040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.357046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.369109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.369480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.369525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.369548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.370160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.370631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.370639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.370645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.370651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.381944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.382330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.382346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.382353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.382513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.382691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.382702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.382709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.382715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.680 [2024-12-09 10:41:09.394729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.680 [2024-12-09 10:41:09.395146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.680 [2024-12-09 10:41:09.395190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.680 [2024-12-09 10:41:09.395214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.680 [2024-12-09 10:41:09.395615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.680 [2024-12-09 10:41:09.395784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.680 [2024-12-09 10:41:09.395792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.680 [2024-12-09 10:41:09.395799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.680 [2024-12-09 10:41:09.395805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.939 [2024-12-09 10:41:09.407673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.939 [2024-12-09 10:41:09.408079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.939 [2024-12-09 10:41:09.408096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.939 [2024-12-09 10:41:09.408103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.939 [2024-12-09 10:41:09.408277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.939 [2024-12-09 10:41:09.408452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.939 [2024-12-09 10:41:09.408460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.939 [2024-12-09 10:41:09.408467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.939 [2024-12-09 10:41:09.408473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.939 [2024-12-09 10:41:09.420669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.939 [2024-12-09 10:41:09.421075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.939 [2024-12-09 10:41:09.421093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.939 [2024-12-09 10:41:09.421100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.939 [2024-12-09 10:41:09.421268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.939 [2024-12-09 10:41:09.421437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.939 [2024-12-09 10:41:09.421445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.939 [2024-12-09 10:41:09.421452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.939 [2024-12-09 10:41:09.421461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.939 [2024-12-09 10:41:09.433529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.939 [2024-12-09 10:41:09.433902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.939 [2024-12-09 10:41:09.433919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.939 [2024-12-09 10:41:09.433927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.939 [2024-12-09 10:41:09.434087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.939 [2024-12-09 10:41:09.434247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.939 [2024-12-09 10:41:09.434255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.939 [2024-12-09 10:41:09.434261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.939 [2024-12-09 10:41:09.434267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.939 [2024-12-09 10:41:09.446325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.939 [2024-12-09 10:41:09.446709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.939 [2024-12-09 10:41:09.446728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.939 [2024-12-09 10:41:09.446736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.939 [2024-12-09 10:41:09.446912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.939 [2024-12-09 10:41:09.447083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.939 [2024-12-09 10:41:09.447091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.939 [2024-12-09 10:41:09.447098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.939 [2024-12-09 10:41:09.447104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.939 [2024-12-09 10:41:09.459157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.939 [2024-12-09 10:41:09.459526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.939 [2024-12-09 10:41:09.459542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.939 [2024-12-09 10:41:09.459549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.939 [2024-12-09 10:41:09.459708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.939 [2024-12-09 10:41:09.459872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.939 [2024-12-09 10:41:09.459880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.939 [2024-12-09 10:41:09.459887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.939 [2024-12-09 10:41:09.459893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.939 [2024-12-09 10:41:09.472132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.939 [2024-12-09 10:41:09.472544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.939 [2024-12-09 10:41:09.472564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.939 [2024-12-09 10:41:09.472572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.939 [2024-12-09 10:41:09.472741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.939 [2024-12-09 10:41:09.472934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.939 [2024-12-09 10:41:09.472943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.939 [2024-12-09 10:41:09.472951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.939 [2024-12-09 10:41:09.472957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.939 [2024-12-09 10:41:09.485225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.939 [2024-12-09 10:41:09.485621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.485638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.485645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.485819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.485989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.485997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.940 [2024-12-09 10:41:09.486003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.940 [2024-12-09 10:41:09.486010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.940 [2024-12-09 10:41:09.498221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.940 [2024-12-09 10:41:09.498546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.498562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.498570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.498738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.498911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.498920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.940 [2024-12-09 10:41:09.498926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.940 [2024-12-09 10:41:09.498932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.940 [2024-12-09 10:41:09.511141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.940 [2024-12-09 10:41:09.511566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.511612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.511636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.512246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.512789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.512797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.940 [2024-12-09 10:41:09.512803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.940 [2024-12-09 10:41:09.512819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.940 [2024-12-09 10:41:09.524002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.940 [2024-12-09 10:41:09.524335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.524352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.524359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.524519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.524679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.524687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.940 [2024-12-09 10:41:09.524693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.940 [2024-12-09 10:41:09.524698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.940 [2024-12-09 10:41:09.536922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.940 [2024-12-09 10:41:09.537318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.537333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.537340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.537500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.537658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.537666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.940 [2024-12-09 10:41:09.537672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.940 [2024-12-09 10:41:09.537678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.940 [2024-12-09 10:41:09.549755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.940 [2024-12-09 10:41:09.550159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.550176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.550184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.550353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.550522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.550536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.940 [2024-12-09 10:41:09.550542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.940 [2024-12-09 10:41:09.550548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.940 [2024-12-09 10:41:09.562565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.940 [2024-12-09 10:41:09.562964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.563011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.563034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.563619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.563813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.563821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.940 [2024-12-09 10:41:09.563828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.940 [2024-12-09 10:41:09.563834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.940 [2024-12-09 10:41:09.575319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.940 [2024-12-09 10:41:09.575713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.940 [2024-12-09 10:41:09.575757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.940 [2024-12-09 10:41:09.575781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.940 [2024-12-09 10:41:09.576582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.940 [2024-12-09 10:41:09.576992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.940 [2024-12-09 10:41:09.577001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.941 [2024-12-09 10:41:09.577008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.941 [2024-12-09 10:41:09.577014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.941 [2024-12-09 10:41:09.588144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.941 [2024-12-09 10:41:09.588508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.941 [2024-12-09 10:41:09.588525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.941 [2024-12-09 10:41:09.588532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.941 [2024-12-09 10:41:09.588692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.941 [2024-12-09 10:41:09.588857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.941 [2024-12-09 10:41:09.588866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.941 [2024-12-09 10:41:09.588872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.941 [2024-12-09 10:41:09.588877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.941 [2024-12-09 10:41:09.601008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.941 [2024-12-09 10:41:09.601390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.941 [2024-12-09 10:41:09.601432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.941 [2024-12-09 10:41:09.601458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.941 [2024-12-09 10:41:09.602014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.941 [2024-12-09 10:41:09.602174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.941 [2024-12-09 10:41:09.602182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.941 [2024-12-09 10:41:09.602188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.941 [2024-12-09 10:41:09.602194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.941 [2024-12-09 10:41:09.613958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.941 [2024-12-09 10:41:09.614356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.941 [2024-12-09 10:41:09.614373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.941 [2024-12-09 10:41:09.614379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.941 [2024-12-09 10:41:09.614539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.941 [2024-12-09 10:41:09.614699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.941 [2024-12-09 10:41:09.614706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.941 [2024-12-09 10:41:09.614712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.941 [2024-12-09 10:41:09.614718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.941 [2024-12-09 10:41:09.626785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2814417 Killed "${NVMF_APP[@]}" "$@"
00:30:31.941 [2024-12-09 10:41:09.627180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.941 [2024-12-09 10:41:09.627197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.941 [2024-12-09 10:41:09.627204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.941 [2024-12-09 10:41:09.627372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.941 [2024-12-09 10:41:09.627540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.941 [2024-12-09 10:41:09.627548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.941 [2024-12-09 10:41:09.627555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.941 [2024-12-09 10:41:09.627560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2815834
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2815834
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2815834 ']'
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:31.941 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:31.941 [2024-12-09 10:41:09.639759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.941 [2024-12-09 10:41:09.640078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.941 [2024-12-09 10:41:09.640096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.941 [2024-12-09 10:41:09.640103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.941 [2024-12-09 10:41:09.640277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.941 [2024-12-09 10:41:09.640451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.941 [2024-12-09 10:41:09.640460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.941 [2024-12-09 10:41:09.640467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.941 [2024-12-09 10:41:09.640473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.941 [2024-12-09 10:41:09.652886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.941 [2024-12-09 10:41:09.653276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.941 [2024-12-09 10:41:09.653293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:31.941 [2024-12-09 10:41:09.653301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:31.941 [2024-12-09 10:41:09.653475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:31.941 [2024-12-09 10:41:09.653648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.941 [2024-12-09 10:41:09.653657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.941 [2024-12-09 10:41:09.653664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.941 [2024-12-09 10:41:09.653671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.199 [2024-12-09 10:41:09.665927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.199 [2024-12-09 10:41:09.666335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.199 [2024-12-09 10:41:09.666352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.199 [2024-12-09 10:41:09.666360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.199 [2024-12-09 10:41:09.666533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.199 [2024-12-09 10:41:09.666706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.199 [2024-12-09 10:41:09.666714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.199 [2024-12-09 10:41:09.666720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.199 [2024-12-09 10:41:09.666726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.199 [2024-12-09 10:41:09.678897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.199 [2024-12-09 10:41:09.679230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.199 [2024-12-09 10:41:09.679248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.199 [2024-12-09 10:41:09.679255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.199 [2024-12-09 10:41:09.679425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.199 [2024-12-09 10:41:09.679594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.199 [2024-12-09 10:41:09.679602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.199 [2024-12-09 10:41:09.679608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.199 [2024-12-09 10:41:09.679615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.199 [2024-12-09 10:41:09.684396] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization...
00:30:32.199 [2024-12-09 10:41:09.684434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.199 [2024-12-09 10:41:09.692130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.199 [2024-12-09 10:41:09.692537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.199 [2024-12-09 10:41:09.692555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:32.199 [2024-12-09 10:41:09.692563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:32.199 [2024-12-09 10:41:09.692737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:32.199 [2024-12-09 10:41:09.692916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.199 [2024-12-09 10:41:09.692925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.199 [2024-12-09 10:41:09.692932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.199 [2024-12-09 10:41:09.692939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.199 [2024-12-09 10:41:09.705108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.199 [2024-12-09 10:41:09.705528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.199 [2024-12-09 10:41:09.705546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:32.199 [2024-12-09 10:41:09.705553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:32.199 [2024-12-09 10:41:09.705728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:32.199 [2024-12-09 10:41:09.705905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.199 [2024-12-09 10:41:09.705915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.199 [2024-12-09 10:41:09.705921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.199 [2024-12-09 10:41:09.705927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.199 [2024-12-09 10:41:09.718219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.199 [2024-12-09 10:41:09.718626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.199 [2024-12-09 10:41:09.718643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:32.199 [2024-12-09 10:41:09.718651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:32.199 [2024-12-09 10:41:09.718829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:32.199 [2024-12-09 10:41:09.719002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.199 [2024-12-09 10:41:09.719010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.199 [2024-12-09 10:41:09.719017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.199 [2024-12-09 10:41:09.719023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.199 [2024-12-09 10:41:09.731190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.199 [2024-12-09 10:41:09.731608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.199 [2024-12-09 10:41:09.731625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:32.199 [2024-12-09 10:41:09.731634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:32.199 [2024-12-09 10:41:09.731814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:32.199 [2024-12-09 10:41:09.731989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.199 [2024-12-09 10:41:09.731998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.199 [2024-12-09 10:41:09.732004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.199 [2024-12-09 10:41:09.732011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.199 [2024-12-09 10:41:09.744244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.199 [2024-12-09 10:41:09.744647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.199 [2024-12-09 10:41:09.744665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:32.199 [2024-12-09 10:41:09.744676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:32.199 [2024-12-09 10:41:09.744854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:32.199 [2024-12-09 10:41:09.745029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.199 [2024-12-09 10:41:09.745038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.199 [2024-12-09 10:41:09.745044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.199 [2024-12-09 10:41:09.745051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.199 [2024-12-09 10:41:09.757296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.199 [2024-12-09 10:41:09.757697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.199 [2024-12-09 10:41:09.757714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420 00:30:32.199 [2024-12-09 10:41:09.757722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set 00:30:32.199 [2024-12-09 10:41:09.757900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor 00:30:32.199 [2024-12-09 10:41:09.758075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.199 [2024-12-09 10:41:09.758083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.199 [2024-12-09 10:41:09.758089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.199 [2024-12-09 10:41:09.758096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.200 [2024-12-09 10:41:09.763622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:32.200 [2024-12-09 10:41:09.770352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.770758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.770777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.770785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.770966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.771142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.771151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.771158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.771164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.783307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.783705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.783721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.783729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.783920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.784100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.784108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.784115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.784122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.796209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.796607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.796624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.796632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.796801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.796996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.797005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.797012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.797019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.805137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:32.200 [2024-12-09 10:41:09.805161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:32.200 [2024-12-09 10:41:09.805168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:32.200 [2024-12-09 10:41:09.805174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:32.200 [2024-12-09 10:41:09.805179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:32.200 [2024-12-09 10:41:09.806495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:32.200 [2024-12-09 10:41:09.806605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:32.200 [2024-12-09 10:41:09.806606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:32.200 [2024-12-09 10:41:09.809193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.809601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.809619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.809627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.809803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.809982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.809991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.809998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.810005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.822256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.822608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.822628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.822636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.822815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.822992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.823000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.823007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.823014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.835285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.835709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.835728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.835736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.835916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.836094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.836103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.836110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.836117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.848351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.848778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.848798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.848806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.848986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.849161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.849169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.849177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.849184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.861420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.861848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.861868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.861882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.862056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.862231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.862239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.862246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.862253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.874496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.874906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.874925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.874933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.875107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.875281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.875289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.875296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.875303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 [2024-12-09 10:41:09.887540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.887885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.887904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.887911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.888085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.888260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.888269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.888276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.888282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:32.200 [2024-12-09 10:41:09.900659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:30:32.200 [2024-12-09 10:41:09.901067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.901085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.901092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.901270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:32.200 [2024-12-09 10:41:09.901443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.901452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.901460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.901466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.200 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:32.200 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.200 [2024-12-09 10:41:09.913713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.200 [2024-12-09 10:41:09.913990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.200 [2024-12-09 10:41:09.914008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.200 [2024-12-09 10:41:09.914016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.200 [2024-12-09 10:41:09.914191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.200 [2024-12-09 10:41:09.914365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.200 [2024-12-09 10:41:09.914374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.200 [2024-12-09 10:41:09.914380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.200 [2024-12-09 10:41:09.914386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.458 [2024-12-09 10:41:09.926785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.458 [2024-12-09 10:41:09.927121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.458 [2024-12-09 10:41:09.927139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.458 [2024-12-09 10:41:09.927147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.458 [2024-12-09 10:41:09.927319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.458 [2024-12-09 10:41:09.927493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.458 [2024-12-09 10:41:09.927501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.458 [2024-12-09 10:41:09.927508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.458 [2024-12-09 10:41:09.927515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.458 [2024-12-09 10:41:09.939751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.458 [2024-12-09 10:41:09.940046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.458 [2024-12-09 10:41:09.940063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.458 [2024-12-09 10:41:09.940071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.458 [2024-12-09 10:41:09.940244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.458 [2024-12-09 10:41:09.940417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.458 [2024-12-09 10:41:09.940425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.458 [2024-12-09 10:41:09.940432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.458 [2024-12-09 10:41:09.940438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.458 [2024-12-09 10:41:09.942944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.458 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.458 [2024-12-09 10:41:09.952844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.458 [2024-12-09 10:41:09.953178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.458 [2024-12-09 10:41:09.953195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.458 [2024-12-09 10:41:09.953202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.458 [2024-12-09 10:41:09.953376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.459 [2024-12-09 10:41:09.953550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.459 [2024-12-09 10:41:09.953558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.459 [2024-12-09 10:41:09.953565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.459 [2024-12-09 10:41:09.953572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.459 [2024-12-09 10:41:09.965853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.459 [2024-12-09 10:41:09.966146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.459 [2024-12-09 10:41:09.966163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.459 [2024-12-09 10:41:09.966170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.459 [2024-12-09 10:41:09.966344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.459 [2024-12-09 10:41:09.966518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.459 [2024-12-09 10:41:09.966527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.459 [2024-12-09 10:41:09.966534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.459 [2024-12-09 10:41:09.966540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.459 Malloc0
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.459 [2024-12-09 10:41:09.978957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.459 [2024-12-09 10:41:09.979365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.459 [2024-12-09 10:41:09.979382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.459 [2024-12-09 10:41:09.979390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.459 [2024-12-09 10:41:09.979564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.459 [2024-12-09 10:41:09.979738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.459 [2024-12-09 10:41:09.979747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.459 [2024-12-09 10:41:09.979754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.459 [2024-12-09 10:41:09.979760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.459 [2024-12-09 10:41:09.991991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.459 [2024-12-09 10:41:09.992397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.459 [2024-12-09 10:41:09.992413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb10120 with addr=10.0.0.2, port=4420
00:30:32.459 [2024-12-09 10:41:09.992421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb10120 is same with the state(6) to be set
00:30:32.459 [2024-12-09 10:41:09.992595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb10120 (9): Bad file descriptor
00:30:32.459 [2024-12-09 10:41:09.992768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.459 [2024-12-09 10:41:09.992776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.459 [2024-12-09 10:41:09.992783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.459 [2024-12-09 10:41:09.992789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.459 10:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:32.459 [2024-12-09 10:41:10.000192] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.459 10:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.459 [2024-12-09 10:41:10.005084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.459 10:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2814905 00:30:32.459 [2024-12-09 10:41:10.027031] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:30:33.830 4924.50 IOPS, 19.24 MiB/s [2024-12-09T09:41:12.487Z] 5863.00 IOPS, 22.90 MiB/s [2024-12-09T09:41:13.422Z] 6579.38 IOPS, 25.70 MiB/s [2024-12-09T09:41:14.355Z] 7129.89 IOPS, 27.85 MiB/s [2024-12-09T09:41:15.287Z] 7569.00 IOPS, 29.57 MiB/s [2024-12-09T09:41:16.218Z] 7942.73 IOPS, 31.03 MiB/s [2024-12-09T09:41:17.146Z] 8223.25 IOPS, 32.12 MiB/s [2024-12-09T09:41:18.517Z] 8475.92 IOPS, 33.11 MiB/s [2024-12-09T09:41:19.448Z] 8691.43 IOPS, 33.95 MiB/s 00:30:41.724 Latency(us) 00:30:41.724 [2024-12-09T09:41:19.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.724 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:41.724 Verification LBA range: start 0x0 length 0x4000 00:30:41.724 Nvme1n1 : 15.00 8882.44 34.70 11144.00 0.00 6372.28 436.91 23093.64 00:30:41.724 [2024-12-09T09:41:19.448Z] =================================================================================================================== 00:30:41.724 [2024-12-09T09:41:19.448Z] Total : 8882.44 34.70 11144.00 0.00 6372.28 436.91 23093.64 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.724 rmmod nvme_tcp 00:30:41.724 rmmod nvme_fabrics 00:30:41.724 rmmod nvme_keyring 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2815834 ']' 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2815834 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2815834 ']' 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2815834 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815834 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815834' 00:30:41.724 killing process with pid 2815834 00:30:41.724 
10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2815834 00:30:41.724 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2815834 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.982 10:41:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.514 00:30:44.514 real 0m26.700s 00:30:44.514 user 1m3.076s 00:30:44.514 sys 0m6.699s 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:44.514 ************************************ 00:30:44.514 END TEST nvmf_bdevperf 00:30:44.514 
************************************ 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.514 ************************************ 00:30:44.514 START TEST nvmf_target_disconnect 00:30:44.514 ************************************ 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:44.514 * Looking for test storage... 00:30:44.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.514 --rc genhtml_branch_coverage=1 00:30:44.514 --rc genhtml_function_coverage=1 00:30:44.514 --rc genhtml_legend=1 00:30:44.514 --rc geninfo_all_blocks=1 00:30:44.514 --rc geninfo_unexecuted_blocks=1 
00:30:44.514 00:30:44.514 ' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.514 --rc genhtml_branch_coverage=1 00:30:44.514 --rc genhtml_function_coverage=1 00:30:44.514 --rc genhtml_legend=1 00:30:44.514 --rc geninfo_all_blocks=1 00:30:44.514 --rc geninfo_unexecuted_blocks=1 00:30:44.514 00:30:44.514 ' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.514 --rc genhtml_branch_coverage=1 00:30:44.514 --rc genhtml_function_coverage=1 00:30:44.514 --rc genhtml_legend=1 00:30:44.514 --rc geninfo_all_blocks=1 00:30:44.514 --rc geninfo_unexecuted_blocks=1 00:30:44.514 00:30:44.514 ' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.514 --rc genhtml_branch_coverage=1 00:30:44.514 --rc genhtml_function_coverage=1 00:30:44.514 --rc genhtml_legend=1 00:30:44.514 --rc geninfo_all_blocks=1 00:30:44.514 --rc geninfo_unexecuted_blocks=1 00:30:44.514 00:30:44.514 ' 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.514 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.515 10:41:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:44.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.515 10:41:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.077 
10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:51.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:51.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.077 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:51.078 Found net devices under 0000:86:00.0: cvl_0_0 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:51.078 Found net devices under 0000:86:00.1: cvl_0_1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.078 10:41:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:30:51.078 00:30:51.078 --- 10.0.0.2 ping statistics --- 00:30:51.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.078 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:30:51.078 00:30:51.078 --- 10.0.0.1 ping statistics --- 00:30:51.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.078 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:51.078 10:41:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:51.078 ************************************ 00:30:51.078 START TEST nvmf_target_disconnect_tc1 00:30:51.078 ************************************ 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:51.078 10:41:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.078 [2024-12-09 10:41:28.045891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.078 [2024-12-09 10:41:28.045934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a36ac0 with 
addr=10.0.0.2, port=4420 00:30:51.078 [2024-12-09 10:41:28.045972] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:51.078 [2024-12-09 10:41:28.045988] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:51.078 [2024-12-09 10:41:28.045994] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:51.078 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:51.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:51.078 Initializing NVMe Controllers 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:51.078 00:30:51.078 real 0m0.123s 00:30:51.078 user 0m0.060s 00:30:51.078 sys 0m0.062s 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:51.078 ************************************ 00:30:51.078 END TEST nvmf_target_disconnect_tc1 00:30:51.078 ************************************ 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:51.078 10:41:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.078 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:51.078 ************************************ 00:30:51.079 START TEST nvmf_target_disconnect_tc2 00:30:51.079 ************************************ 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2820997 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2820997 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2820997 ']' 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 [2024-12-09 10:41:28.184334] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:30:51.079 [2024-12-09 10:41:28.184373] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.079 [2024-12-09 10:41:28.264954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.079 [2024-12-09 10:41:28.306795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.079 [2024-12-09 10:41:28.306834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.079 [2024-12-09 10:41:28.306840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.079 [2024-12-09 10:41:28.306846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.079 [2024-12-09 10:41:28.306851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:51.079 [2024-12-09 10:41:28.308488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:51.079 [2024-12-09 10:41:28.308513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:51.079 [2024-12-09 10:41:28.308599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:51.079 [2024-12-09 10:41:28.308599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 Malloc0 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.079 10:41:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 [2024-12-09 10:41:28.479413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.079 10:41:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 [2024-12-09 10:41:28.511659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2821021 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:51.079 10:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:52.994 10:41:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2820997 00:30:52.994 10:41:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read 
completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 [2024-12-09 10:41:30.540293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 
00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Write completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 
00:30:52.994 [2024-12-09 10:41:30.540505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.994 starting I/O failed 00:30:52.994 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 
starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 [2024-12-09 10:41:30.540701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 
00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Write completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 Read completed with error (sct=0, sc=8) 00:30:52.995 starting I/O failed 00:30:52.995 [2024-12-09 10:41:30.540912] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.995 [2024-12-09 10:41:30.541099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.541117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.541275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.541287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.541431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.541442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.541571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.541582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.541774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.541784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 
00:30:52.995 [2024-12-09 10:41:30.541910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.541923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.542013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.542022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.542196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.542207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.542291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.542299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.542382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.542391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 
00:30:52.995 [2024-12-09 10:41:30.542551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.542561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.542717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.542728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.542929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.542940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.543035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.543044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.995 [2024-12-09 10:41:30.543277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.543288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 
00:30:52.995 [2024-12-09 10:41:30.543466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.995 [2024-12-09 10:41:30.543476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.995 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.543540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.543549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.543747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.543759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.543902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.543914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.543999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.544009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 
00:30:52.996 [2024-12-09 10:41:30.544091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.544114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.544227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.544238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.544309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.544319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.544511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.544523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.544686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.544696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 
00:30:52.996 [2024-12-09 10:41:30.544830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.544841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.544991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.545001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.545206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.545238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.545358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.545389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.545737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.545768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 
00:30:52.996 [2024-12-09 10:41:30.545975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.546008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.546145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.546176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.546316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.546338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.546451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.546473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.546654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.546674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 
00:30:52.996 [2024-12-09 10:41:30.546870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.546892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.547105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.547137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.547286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.547317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.547508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.547540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.547708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.547741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 
00:30:52.996 [2024-12-09 10:41:30.547944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.547977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.548244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.548265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.548501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.548524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.548781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.548803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.548899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.548920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 
00:30:52.996 [2024-12-09 10:41:30.549091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.549113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.549230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.549251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.549405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.549427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.549680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.549702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.549886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.549909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 
00:30:52.996 [2024-12-09 10:41:30.550080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.550106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.550261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.550283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.550380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.550401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.550581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.996 [2024-12-09 10:41:30.550613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.996 qpair failed and we were unable to recover it. 00:30:52.996 [2024-12-09 10:41:30.550793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.550832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997 [2024-12-09 10:41:30.551021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.551053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.551310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.551331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.551500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.551521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.551744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.551776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.551960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.552025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997 [2024-12-09 10:41:30.552254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.552290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.552586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.552619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.552882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.552915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.553118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.553150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.553344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.553377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997 [2024-12-09 10:41:30.553637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.553670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.553856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.553890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.554026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.554058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.554180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.554203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.554320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.554341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997 [2024-12-09 10:41:30.554544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.554575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.554824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.554857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.555047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.555078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.555231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.555253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.555473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.555495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997 [2024-12-09 10:41:30.555721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.555760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.555970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.556003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.556130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.556167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.556448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.556477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.556653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.556681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997 [2024-12-09 10:41:30.556864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.556894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.557079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.557108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.557233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.557262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.557470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.557500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 00:30:52.997 [2024-12-09 10:41:30.557805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.557858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997 [2024-12-09 10:41:30.558046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.997 [2024-12-09 10:41:30.558076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:52.997 qpair failed and we were unable to recover it. 
00:30:52.997-00:30:53.001 [message repeated ~115 times between 2024-12-09 10:41:30.558207 and 10:41:30.586113: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.] 
00:30:53.001 [2024-12-09 10:41:30.586323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.586354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.586659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.586690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.586925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.586960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.587084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.587115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.587380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.587411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 
00:30:53.001 [2024-12-09 10:41:30.587545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.587575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.587760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.587791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.587931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.587964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.588182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.588214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 00:30:53.001 [2024-12-09 10:41:30.588337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.588367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 
00:30:53.001 [2024-12-09 10:41:30.589268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.001 [2024-12-09 10:41:30.589352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.001 qpair failed and we were unable to recover it. 
00:30:53.003 [2024-12-09 10:41:30.609229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.609261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.609399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.609431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.609652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.609683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.609955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.609988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.610258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.610291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 
00:30:53.003 [2024-12-09 10:41:30.610588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.610620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.610825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.610859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.611006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.611039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.611221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.611253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.003 [2024-12-09 10:41:30.611472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.611505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 
00:30:53.003 [2024-12-09 10:41:30.611746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.003 [2024-12-09 10:41:30.611778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.003 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.611961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.611999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.612192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.612223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.612439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.612472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.612658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.612689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 
00:30:53.004 [2024-12-09 10:41:30.612869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.612903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.613026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.613058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.613316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.613347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.613540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.613572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.613920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.613953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 
00:30:53.004 [2024-12-09 10:41:30.614197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.614230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.614477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.614508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.614750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.614782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.614981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.615014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.615198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.615230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 
00:30:53.004 [2024-12-09 10:41:30.615382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.615414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.615673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.615705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.615852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.615884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.616018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.616050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.616177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.616208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 
00:30:53.004 [2024-12-09 10:41:30.616359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.616390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.616584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.616615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.616833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.616867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.617072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.617104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.617308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.617339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 
00:30:53.004 [2024-12-09 10:41:30.617532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.617565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.617801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.617853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.618056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.618088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.618299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.618331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.618567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.618599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 
00:30:53.004 [2024-12-09 10:41:30.618844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.618879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.619065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.619097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.619286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.619318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.619663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.619695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.619956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.619988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 
00:30:53.004 [2024-12-09 10:41:30.620183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.620214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.620437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.620469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.620589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.620621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.004 [2024-12-09 10:41:30.620738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.004 [2024-12-09 10:41:30.620770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.004 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.620928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.620963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 
00:30:53.005 [2024-12-09 10:41:30.621188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.621221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.621419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.621452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.621654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.621687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.621867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.621900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.622087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.622119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 
00:30:53.005 [2024-12-09 10:41:30.622250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.622282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.622502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.622534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.622799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.622840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.622988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.623020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.623217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.623250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 
00:30:53.005 [2024-12-09 10:41:30.623423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.623454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.623645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.623677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.623868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.623901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.624059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.624091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.624235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.624266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 
00:30:53.005 [2024-12-09 10:41:30.624551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.624583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.624726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.624758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.625014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.625048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.625165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.625198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.625457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.625489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 
00:30:53.005 [2024-12-09 10:41:30.625730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.625763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.625912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.625945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.626127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.626159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.626358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.626391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 00:30:53.005 [2024-12-09 10:41:30.626585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.626616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it. 
00:30:53.005 [2024-12-09 10:41:30.626880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.005 [2024-12-09 10:41:30.626914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.005 qpair failed and we were unable to recover it.
[the three messages above repeat continuously (~115 occurrences, identical except for timestamps) from 10:41:30.626880 through 10:41:30.655438, all with errno = 111 (ECONNREFUSED), tqpair=0x7f5890000b90, addr=10.0.0.2, port=4420]
00:30:53.008 [2024-12-09 10:41:30.655639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.008 [2024-12-09 10:41:30.655670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.008 qpair failed and we were unable to recover it. 00:30:53.008 [2024-12-09 10:41:30.655870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.008 [2024-12-09 10:41:30.655905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.008 qpair failed and we were unable to recover it. 00:30:53.008 [2024-12-09 10:41:30.656204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.008 [2024-12-09 10:41:30.656237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.008 qpair failed and we were unable to recover it. 00:30:53.008 [2024-12-09 10:41:30.656502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.008 [2024-12-09 10:41:30.656534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.008 qpair failed and we were unable to recover it. 00:30:53.008 [2024-12-09 10:41:30.656801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.008 [2024-12-09 10:41:30.656844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.008 qpair failed and we were unable to recover it. 
00:30:53.008 [2024-12-09 10:41:30.657062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.008 [2024-12-09 10:41:30.657095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.008 qpair failed and we were unable to recover it. 00:30:53.008 [2024-12-09 10:41:30.657292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.657324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.657538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.657570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.657769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.657807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.658039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.658071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 
00:30:53.009 [2024-12-09 10:41:30.658271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.658304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.658521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.658553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.658802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.658846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.659055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.659088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.659288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.659321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 
00:30:53.009 [2024-12-09 10:41:30.659637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.659670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.659863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.659899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.660047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.660078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.660274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.660307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.660532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.660564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 
00:30:53.009 [2024-12-09 10:41:30.660836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.660870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.661151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.661183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.661419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.661452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.661667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.661699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.661990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.662024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 
00:30:53.009 [2024-12-09 10:41:30.662176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.662208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.662360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.662393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.662662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.662694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.662863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.662898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.663105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.663138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 
00:30:53.009 [2024-12-09 10:41:30.663314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.663346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.663485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.663517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.663768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.663801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.664104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.664137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.664364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.664396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 
00:30:53.009 [2024-12-09 10:41:30.664713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.664745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.664892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.664926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.665067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.665099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.665254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.665286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.665597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.665630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 
00:30:53.009 [2024-12-09 10:41:30.665912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.665946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.666090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.666123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.666338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.666371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.009 [2024-12-09 10:41:30.666663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.009 [2024-12-09 10:41:30.666696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.009 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.666892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.666926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.667185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.667218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.667453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.667485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.667757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.667789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.667954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.667993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.668218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.668250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.668461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.668494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.668745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.668777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.669000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.669034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.669186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.669218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.669467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.669500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.669719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.669752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.669978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.670012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.670158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.670190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.670500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.670533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.670733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.670765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.671005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.671040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.671315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.671347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.671573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.671606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.671922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.671956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.672212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.672245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.672529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.672562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.672777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.672818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.673001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.673033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.673235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.673267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.673567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.673600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.673894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.673927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.674075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.674107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.674331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.674363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.674577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.674608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.674884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.674917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.675129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.675162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.675362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.675394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.675608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.675640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.675854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.675889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.676108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.676140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 
00:30:53.010 [2024-12-09 10:41:30.676357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.676390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.010 [2024-12-09 10:41:30.676610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.010 [2024-12-09 10:41:30.676642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.010 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.676949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.676983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.677195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.677228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.677348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.677380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.011 [2024-12-09 10:41:30.677660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.677693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.677827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.677861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.678117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.678150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.678431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.678471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.678743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.678775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.011 [2024-12-09 10:41:30.678951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.678984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.679189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.679222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.679361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.679393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.679684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.679718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.679937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.679971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.011 [2024-12-09 10:41:30.680183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.680215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.680370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.680403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.680516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.680548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.680769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.680801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.680955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.680988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.011 [2024-12-09 10:41:30.681188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.681222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.681421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.681454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.681736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.681770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.682004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.682038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.682236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.682269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.011 [2024-12-09 10:41:30.682426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.682459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.682749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.682782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.682995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.683027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.683213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.683245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.683503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.683536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.011 [2024-12-09 10:41:30.683832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.683865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.684021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.684053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.684252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.684284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.684658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.684690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.684956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.684990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.011 [2024-12-09 10:41:30.685145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.685177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.685461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.685493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.685626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.685659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.685887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.685921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 00:30:53.011 [2024-12-09 10:41:30.686064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.011 [2024-12-09 10:41:30.686095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.011 qpair failed and we were unable to recover it. 
00:30:53.012 [2024-12-09 10:41:30.686298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.686329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.686557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.686589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.686847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.686881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.687010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.687042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.687200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.687232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 
00:30:53.012 [2024-12-09 10:41:30.687492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.687526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.687672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.687704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.687984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.688019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.688229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.688268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.688563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.688596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 
00:30:53.012 [2024-12-09 10:41:30.688862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.688896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.689055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.689088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.689364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.689398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.689639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.689671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.689918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.689952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 
00:30:53.012 [2024-12-09 10:41:30.690106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.690138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.690291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.690323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.690559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.690591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.690790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.690831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.691013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.691045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 
00:30:53.012 [2024-12-09 10:41:30.691185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.691219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.691401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.691432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.691638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.691670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.691956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.691991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.692148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.692180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 
00:30:53.012 [2024-12-09 10:41:30.692506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.692540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.692849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.692883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.693020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.693052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.693201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.693233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.693455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.693487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 
00:30:53.012 [2024-12-09 10:41:30.693740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.693773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.694052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.694085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.694238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.694271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.694612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.012 [2024-12-09 10:41:30.694644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.012 qpair failed and we were unable to recover it. 00:30:53.012 [2024-12-09 10:41:30.694773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.694804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 
00:30:53.013 [2024-12-09 10:41:30.694982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.695015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.695158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.695191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.695342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.695374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.695654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.695688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.695913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.695946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 
00:30:53.013 [2024-12-09 10:41:30.696100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.696134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.696462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.696494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.696756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.696789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.697052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.697085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.697228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.697261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 
00:30:53.013 [2024-12-09 10:41:30.697400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.697432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.697638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.697671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.697877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.697911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.698036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.698074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.698269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.698300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 
00:30:53.013 [2024-12-09 10:41:30.698548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.698581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.698888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.698921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.699150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.699182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.699330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.699362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.699621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.699654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 
00:30:53.013 [2024-12-09 10:41:30.699931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.699966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.700120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.700153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.700348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.700382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.700665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.700698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 00:30:53.013 [2024-12-09 10:41:30.700907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.013 [2024-12-09 10:41:30.700940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.013 qpair failed and we were unable to recover it. 
00:30:53.013 [2024-12-09 10:41:30.701130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.701164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.701359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.701391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.701598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.701631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.701937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.701971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.702103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.702135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.702324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.702356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.702593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.702625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.702877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.702912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.703108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.703140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.703344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.703376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.703514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.703545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.013 [2024-12-09 10:41:30.703735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.013 [2024-12-09 10:41:30.703767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.013 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.703983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.704018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.704276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.704308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.704627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.704659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.704790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.704835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.705104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.705136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.705287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.705319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.705642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.705675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.014 [2024-12-09 10:41:30.705829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.014 [2024-12-09 10:41:30.705863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.014 qpair failed and we were unable to recover it.
00:30:53.298 [2024-12-09 10:41:30.706119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.298 [2024-12-09 10:41:30.706151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.298 qpair failed and we were unable to recover it.
00:30:53.298 [2024-12-09 10:41:30.706309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.706342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.706659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.706691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.706948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.706981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.707179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.707212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.707355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.707386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.707662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.707693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.707894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.707928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.708133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.708172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.708304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.708336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.708625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.708659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.708894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.708928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.709153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.709185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.709484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.709515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.709725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.709758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.709977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.710011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.710207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.710238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.710390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.710421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.710551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.710583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.710779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.710839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.711046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.299 [2024-12-09 10:41:30.711079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.299 qpair failed and we were unable to recover it.
00:30:53.299 [2024-12-09 10:41:30.711354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.711386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.711641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.711675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.711905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.711940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.712080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.712114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.712309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.712342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.712624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.712657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.712937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.712970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.713227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.713260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.713458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.713492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.713677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.713709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.713965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.713999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.714255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.714288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.714597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.714629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.714907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.714941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.715138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112cb20 is same with the state(6) to be set
00:30:53.300 [2024-12-09 10:41:30.715544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.715622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.715887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.715926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.716123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.716157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.716309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.716341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.716548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.716581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.716715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.716747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.716968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.717001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.717193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.717226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.717580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.717612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.717835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.717870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.717985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.718017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.718221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.718254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.718556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.718589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.718790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.718841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.719096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.300 [2024-12-09 10:41:30.719130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.300 qpair failed and we were unable to recover it.
00:30:53.300 [2024-12-09 10:41:30.719394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.719425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.719695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.719728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.719953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.719989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.720152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.720185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.720392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.720425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.720690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.720723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.720931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.720965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.721156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.721188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.721324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.721358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.721673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.721705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.721905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.721938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.722130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.722176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.722322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.301 [2024-12-09 10:41:30.722354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.301 qpair failed and we were unable to recover it.
00:30:53.301 [2024-12-09 10:41:30.722550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.722583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.722791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.722833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.723039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.723072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.723269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.723301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.723616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.723648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.723849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.723883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.724137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.724169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.724324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.724355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.724549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.724581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.724807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.724848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.725053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.725085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.725217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.302 [2024-12-09 10:41:30.725249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.302 qpair failed and we were unable to recover it.
00:30:53.302 [2024-12-09 10:41:30.725476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.302 [2024-12-09 10:41:30.725510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.302 qpair failed and we were unable to recover it. 00:30:53.302 [2024-12-09 10:41:30.725711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.302 [2024-12-09 10:41:30.725743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.302 qpair failed and we were unable to recover it. 00:30:53.302 [2024-12-09 10:41:30.725871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.302 [2024-12-09 10:41:30.725905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.302 qpair failed and we were unable to recover it. 00:30:53.302 [2024-12-09 10:41:30.726055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.306 [2024-12-09 10:41:30.726087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.306 qpair failed and we were unable to recover it. 00:30:53.306 [2024-12-09 10:41:30.726236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.726268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 
00:30:53.307 [2024-12-09 10:41:30.726384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.726416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.307 [2024-12-09 10:41:30.726705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.726738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.307 [2024-12-09 10:41:30.726902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.726936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.307 [2024-12-09 10:41:30.727072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.727104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.307 [2024-12-09 10:41:30.727243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.727276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 
00:30:53.307 [2024-12-09 10:41:30.727570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.727603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.307 [2024-12-09 10:41:30.727783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.727825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.307 [2024-12-09 10:41:30.728099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.728131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.307 [2024-12-09 10:41:30.728371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.307 [2024-12-09 10:41:30.728404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.307 qpair failed and we were unable to recover it. 00:30:53.308 [2024-12-09 10:41:30.728686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.728719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 
00:30:53.308 [2024-12-09 10:41:30.728865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.728899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 00:30:53.308 [2024-12-09 10:41:30.729035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.729068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 00:30:53.308 [2024-12-09 10:41:30.729210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.729243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 00:30:53.308 [2024-12-09 10:41:30.729393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.729425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 00:30:53.308 [2024-12-09 10:41:30.729627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.729660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 
00:30:53.308 [2024-12-09 10:41:30.729860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.729894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 00:30:53.308 [2024-12-09 10:41:30.730118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.730150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.308 qpair failed and we were unable to recover it. 00:30:53.308 [2024-12-09 10:41:30.730298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.308 [2024-12-09 10:41:30.730330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.309 qpair failed and we were unable to recover it. 00:30:53.309 [2024-12-09 10:41:30.730713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.309 [2024-12-09 10:41:30.730746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.309 qpair failed and we were unable to recover it. 00:30:53.309 [2024-12-09 10:41:30.730964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.309 [2024-12-09 10:41:30.730998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.309 qpair failed and we were unable to recover it. 
00:30:53.309 [2024-12-09 10:41:30.731214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.309 [2024-12-09 10:41:30.731247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.309 qpair failed and we were unable to recover it. 00:30:53.309 [2024-12-09 10:41:30.731379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.309 [2024-12-09 10:41:30.731417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.309 qpair failed and we were unable to recover it. 00:30:53.309 [2024-12-09 10:41:30.731728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.309 [2024-12-09 10:41:30.731760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.309 qpair failed and we were unable to recover it. 00:30:53.309 [2024-12-09 10:41:30.731978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.309 [2024-12-09 10:41:30.732012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.309 qpair failed and we were unable to recover it. 00:30:53.309 [2024-12-09 10:41:30.732202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.310 [2024-12-09 10:41:30.732234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.310 qpair failed and we were unable to recover it. 
00:30:53.310 [2024-12-09 10:41:30.732344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.310 [2024-12-09 10:41:30.732377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.310 qpair failed and we were unable to recover it. 00:30:53.310 [2024-12-09 10:41:30.732612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.310 [2024-12-09 10:41:30.732643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.310 qpair failed and we were unable to recover it. 00:30:53.310 [2024-12-09 10:41:30.732787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.310 [2024-12-09 10:41:30.732831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.310 qpair failed and we were unable to recover it. 00:30:53.310 [2024-12-09 10:41:30.733037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.314 [2024-12-09 10:41:30.733070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.314 qpair failed and we were unable to recover it. 00:30:53.314 [2024-12-09 10:41:30.733298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.315 [2024-12-09 10:41:30.733331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.315 qpair failed and we were unable to recover it. 
00:30:53.315 [2024-12-09 10:41:30.733625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.315 [2024-12-09 10:41:30.733657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.315 qpair failed and we were unable to recover it. 00:30:53.315 [2024-12-09 10:41:30.733915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.315 [2024-12-09 10:41:30.733952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.315 qpair failed and we were unable to recover it. 00:30:53.315 [2024-12-09 10:41:30.734167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.315 [2024-12-09 10:41:30.734200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.315 qpair failed and we were unable to recover it. 00:30:53.315 [2024-12-09 10:41:30.734450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.315 [2024-12-09 10:41:30.734481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.315 qpair failed and we were unable to recover it. 00:30:53.315 [2024-12-09 10:41:30.734767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.315 [2024-12-09 10:41:30.734798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.315 qpair failed and we were unable to recover it. 
00:30:53.315 [2024-12-09 10:41:30.735103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.315 [2024-12-09 10:41:30.735137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.315 qpair failed and we were unable to recover it. 00:30:53.315 [2024-12-09 10:41:30.735290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.735322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.735520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.735553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.735770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.735801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.735923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.735956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 
00:30:53.316 [2024-12-09 10:41:30.736161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.736194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.736444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.736476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.736778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.736817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.737076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.737108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.737340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.737371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 
00:30:53.316 [2024-12-09 10:41:30.737571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.737602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.737832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.737866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.738140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.738173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.738408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.738442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.738579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.738612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 
00:30:53.316 [2024-12-09 10:41:30.738895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.738930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.739192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.739225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.739438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.739471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.739752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.739786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.739997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.740029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 
00:30:53.316 [2024-12-09 10:41:30.740211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.740243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.740454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.740486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.740772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.316 [2024-12-09 10:41:30.740804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.316 qpair failed and we were unable to recover it. 00:30:53.316 [2024-12-09 10:41:30.741019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.741052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.741358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.741390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 
00:30:53.317 [2024-12-09 10:41:30.741600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.741633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.741824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.741865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.742079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.742112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.742304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.742337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.742563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.742595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 
00:30:53.317 [2024-12-09 10:41:30.742891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.742925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.743187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.743219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.743344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.743376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.743585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.743617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.743757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.743789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 
00:30:53.317 [2024-12-09 10:41:30.744001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.744033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.744283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.744315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.744594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.744626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.744876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.744910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.745095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.745127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 
00:30:53.317 [2024-12-09 10:41:30.745411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.745444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.745753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.745785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.746017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.746050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.746201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.746233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.746350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.746383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 
00:30:53.317 [2024-12-09 10:41:30.746589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.746620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.746899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.746933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.747186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.747219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.747400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.747432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 00:30:53.317 [2024-12-09 10:41:30.747704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.317 [2024-12-09 10:41:30.747735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.317 qpair failed and we were unable to recover it. 
00:30:53.323 [2024-12-09 10:41:30.776279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.323 [2024-12-09 10:41:30.776311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.323 qpair failed and we were unable to recover it. 00:30:53.323 [2024-12-09 10:41:30.776568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.323 [2024-12-09 10:41:30.776600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.776908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.776943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.777218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.777250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.777456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.777488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.777763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.777796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.778001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.778033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.778321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.778353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.778574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.778606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.778827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.778860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.779014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.779046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.779252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.779284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.779658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.779692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.779969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.780004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.780200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.780232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.780376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.780407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.780606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.780639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.780827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.780861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.781014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.781045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.781249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.781281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.781429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.781461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.781672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.781704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.781910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.781945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.782268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.782299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.782582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.782615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.782846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.782881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.783027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.783065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.783271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.783302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.783429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.783461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.783650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.783682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.784043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.784078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.784290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.784323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.784641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.784674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.784953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.784987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.785098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.785129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.785268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.785299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.785422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.785454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.785580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.785612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.785918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.785953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.786139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.786172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.786375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.786408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.786705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.786737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.786879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.786913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.787068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.787100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.787354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.787386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.787659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.787692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.787899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.787934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.788138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.788171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.788337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.788369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.324 [2024-12-09 10:41:30.788659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.788691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 
00:30:53.324 [2024-12-09 10:41:30.788912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.324 [2024-12-09 10:41:30.788945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.324 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.789240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.789271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.789559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.789591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.789743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.789776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.789955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.789988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 
00:30:53.325 [2024-12-09 10:41:30.790185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.790218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.790428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.790461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.790647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.790679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.790913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.790948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.791184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.791218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 
00:30:53.325 [2024-12-09 10:41:30.791360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.791392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.791644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.791679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.791933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.791968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.792117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.792150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.792357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.792391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 
00:30:53.325 [2024-12-09 10:41:30.792629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.792662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.792920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.792961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.793083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.793115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.793313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.793345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.793559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.793591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 
00:30:53.325 [2024-12-09 10:41:30.793783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.793825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.793954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.793987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.794221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.794253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.794476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.794510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 00:30:53.325 [2024-12-09 10:41:30.794730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.325 [2024-12-09 10:41:30.794761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.325 qpair failed and we were unable to recover it. 
00:30:53.325 [2024-12-09 10:41:30.795005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.325 [2024-12-09 10:41:30.795038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.325 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure record triplet repeats for tqpair=0x7f5888000b90 from 10:41:30.795152 through 10:41:30.799837 ...]
00:30:53.326 [2024-12-09 10:41:30.799919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.326 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f5890000b90 from 10:41:30.800201 through 10:41:30.823379 ...]
00:30:53.327 [2024-12-09 10:41:30.823562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.823595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.823857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.823891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.824054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.824085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.824236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.824268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.824513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.824545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.824794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.824835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.824986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.825019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.825302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.825335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.825515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.825547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.825828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.825862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.826029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.826063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.826213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.826246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.826507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.826538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.826679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.826711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.826925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.826959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.827177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.827210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.827442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.827618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.827650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.827948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.827982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.828245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.828277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.828431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.828463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.828771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.828803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.828951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.828983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.829160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.829198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.829404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.829437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.829570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.829602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.829831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.829864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.830118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.830151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.830282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.830313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.830605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.830636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.830935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.830969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.831103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.831135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.831336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.831369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.831507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.831540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.831831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.831865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.832052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.832083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.832216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.832247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.832457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.832491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.832721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.832753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.832894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.832927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.833085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.833119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.833395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.833427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.833628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.833660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.833933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.833968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.834281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.834313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.834556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.834588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.834907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.834941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.835140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.835173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.835456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.835488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 00:30:53.328 [2024-12-09 10:41:30.835613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.328 [2024-12-09 10:41:30.835645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.328 qpair failed and we were unable to recover it. 
00:30:53.328 [2024-12-09 10:41:30.836991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.328 [2024-12-09 10:41:30.837068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.328 qpair failed and we were unable to recover it.
00:30:53.329 [2024-12-09 10:41:30.846860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.846893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.847037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.847069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.847191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.847223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.847425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.847457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.847655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.847689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.847946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.847981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.848110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.848143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.848301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.848332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.848467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.848500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.848646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.848679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.848877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.848910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.849110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.849141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.849341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.849372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.849508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.849539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.849763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.849797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.850069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.850103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.850239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.850271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.850466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.850499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.850633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.850665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.850870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.850905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.851032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.851064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.851316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.851348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.851552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.851584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.851714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.851745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.851882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.851914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.852066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.852098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.852228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.852259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.852456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.852487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.852697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.852728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.852920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.852956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.853164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.853196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.853304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.853337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.853482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.853514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.853697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.853728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.853966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.853999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.854255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.854288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.854409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.854448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.854575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.854607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.854805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.854845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.855018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.855049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.855187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.855219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.855338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.855370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.855506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.855538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.855791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.855837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.329 [2024-12-09 10:41:30.856037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.856071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 
00:30:53.329 [2024-12-09 10:41:30.856310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.329 [2024-12-09 10:41:30.856343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.329 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.856567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.856599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.856785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.856833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.856958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.856991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.857116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.857148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 
00:30:53.330 [2024-12-09 10:41:30.857356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.857389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.857574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.857606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.857721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.857753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.857946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.857979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.858088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.858119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 
00:30:53.330 [2024-12-09 10:41:30.858429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.858461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.858715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.858747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.858984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.859017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.859234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.859267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.859449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.859482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 
00:30:53.330 [2024-12-09 10:41:30.859696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.859729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.859855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.859889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.860094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.860125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.860249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.860286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.860516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.860547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 
00:30:53.330 [2024-12-09 10:41:30.860735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.860767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.860986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.861018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.861203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.861236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.861457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.861489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.861685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.861716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 
00:30:53.330 [2024-12-09 10:41:30.861858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.861892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.862025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.862058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.862259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.862292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.862426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.862457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 00:30:53.330 [2024-12-09 10:41:30.862569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.330 [2024-12-09 10:41:30.862600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.330 qpair failed and we were unable to recover it. 
00:30:53.330 [2024-12-09 10:41:30.862717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.330 [2024-12-09 10:41:30.862749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.330 qpair failed and we were unable to recover it.
[preceding connect()/qpair error sequence repeated 114 more times, 2024-12-09 10:41:30.862880 through 10:41:30.886988]
00:30:53.332 [2024-12-09 10:41:30.887095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.887125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.887299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.887332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.887555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.887589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.887785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.887829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.888011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.888044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.888159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.888190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.888309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.888341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.888481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.888512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.888636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.888668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.888829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.888863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.889081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.889114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.889227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.889258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.889473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.889505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.889625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.889656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.889766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.889797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.889918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.889951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.890132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.890165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.890435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.890467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.890662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.890695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.890824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.890858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.891047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.891079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.891216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.891248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.891449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.891482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.891614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.891646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.891766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.891797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.891991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.892023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.892238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.892270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.892413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.892445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.892559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.892589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.892702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.892732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.892909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.892943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.893067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.893097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.893276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.893306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.893417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.893450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.893560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.893591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.893698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.893731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.893985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.894060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.894265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.894300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.894421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.894454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.894570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.894601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.894855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.894890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.895069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.895100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.895217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.895250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.895370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.895401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.895580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.895611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.895720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.895752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.895958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.895994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.896129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.896161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.896390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.896421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.896559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.896601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.896781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.896824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.897030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.897063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.897245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.897279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.897409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.897441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.897549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.897582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 
00:30:53.332 [2024-12-09 10:41:30.897769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.332 [2024-12-09 10:41:30.897801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.332 qpair failed and we were unable to recover it. 00:30:53.332 [2024-12-09 10:41:30.898111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.898144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.898338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.898370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.898565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.898598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.898849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.898883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 
00:30:53.333 [2024-12-09 10:41:30.899012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.899044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.899163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.899194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.899351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.899383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.899632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.899665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.899850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.899885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 
00:30:53.333 [2024-12-09 10:41:30.900017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.900047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.900152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.900184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.900382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.900414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.900546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.900577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.900707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.900738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 
00:30:53.333 [2024-12-09 10:41:30.900856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.900890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.901011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.901043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.901163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.901195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.901320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.901351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 00:30:53.333 [2024-12-09 10:41:30.901548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.901579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it. 
00:30:53.333 [2024-12-09 10:41:30.901699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.333 [2024-12-09 10:41:30.901730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.333 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() failure (errno = 111, ECONNREFUSED) and nvme_tcp_qpair_connect_sock recovery error for tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 repeat continuously through 2024-12-09 10:41:30.923294; duplicate log lines elided ...]
00:30:53.335 [2024-12-09 10:41:30.923415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.923447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.923628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.923659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.923866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.923900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.924099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.924131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.924250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.924283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 
00:30:53.335 [2024-12-09 10:41:30.924521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.924554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.924770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.924821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.924948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.924980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.925094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.925126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.925249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.925281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 
00:30:53.335 [2024-12-09 10:41:30.925413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.925445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.925568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.925600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.925845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.925879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.926059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.926090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.926262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.926294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 
00:30:53.335 [2024-12-09 10:41:30.926413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.926446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.926643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.926676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.926856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.926888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.927031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.927063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.927173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.927204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 
00:30:53.335 [2024-12-09 10:41:30.927452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.927484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.927752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.927784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.928000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.928033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.928154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.928186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.928428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.928461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 
00:30:53.335 [2024-12-09 10:41:30.928588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.928620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.928739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.928770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.335 [2024-12-09 10:41:30.928904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.335 [2024-12-09 10:41:30.928937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.335 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.929047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.929078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.929278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.929310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.929500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.929532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.929671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.929702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.929943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.929978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.930165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.930197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.930324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.930356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.930545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.930577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.930687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.930719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.930835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.930868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.930991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.931023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.931212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.931244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.931361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.931393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.931576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.931608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.931785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.931830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.931950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.931981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.932163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.932195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.932387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.932419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.932547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.932585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.932712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.932744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.932875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.932908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.933097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.933129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.933245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.933277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.933479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.933510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.933641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.933673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.933796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.933850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.933963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.933995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.934115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.934147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.934331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.934363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.934541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.934573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.934692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.934723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.934917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.934952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.935076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.935108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.935326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.935358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.935550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.935582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.935768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.935799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.935940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.935973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.936170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.936202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.936327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.936358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.936555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.936587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.936696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.936728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.936836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.936869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.937045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.937077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.937197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.937229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.937354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.937386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.937507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.937540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 00:30:53.336 [2024-12-09 10:41:30.937743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.937779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
00:30:53.336 [2024-12-09 10:41:30.938076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.336 [2024-12-09 10:41:30.938159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.336 qpair failed and we were unable to recover it. 
[... the same failure sequence repeats 19 more times for tqpair=0x7f5884000b90, addr=10.0.0.2, port=4420, timestamps 10:41:30.938374 through 10:41:30.942231 ...]
00:30:53.337 [2024-12-09 10:41:30.942469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.942501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.942623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.942654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.942791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.942835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.943092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.943124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.943306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.943338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.943527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.943559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.943675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.943706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.943882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.943915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.944038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.944070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.944256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.944288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.944471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.944504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.944617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.944649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.944784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.944827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.944947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.944979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.945086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.945118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.945238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.945270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.945378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.945410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.945588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.945620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.945729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.945760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.945881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.945914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.946038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.946071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.946172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.946204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.946383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.946415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.946587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.946620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.946721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.946752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.946948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.946980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.947093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.947124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.947297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.947328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.947538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.947569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.947751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.947783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.948044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.948076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.948318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.948356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.948459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.948490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.948610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.948642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.948884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.948918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.949036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.949068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.949186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.949217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.949326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.949359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.949472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.949502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.949616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.949648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 
00:30:53.337 [2024-12-09 10:41:30.949829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.337 [2024-12-09 10:41:30.949862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.337 qpair failed and we were unable to recover it. 00:30:53.337 [2024-12-09 10:41:30.950064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.950096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.950206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.950237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.950405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.950437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.950636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.950667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.338 [2024-12-09 10:41:30.950845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.950878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.951053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.951084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.951190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.951221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.951391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.951423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.951602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.951633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.338 [2024-12-09 10:41:30.951845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.951876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.952050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.952081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.952199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.952230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.952355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.952386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.952557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.952589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.338 [2024-12-09 10:41:30.952709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.952740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.952912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.952945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.953121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.953152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.953291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.953323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.953589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.953621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.338 [2024-12-09 10:41:30.953741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.953773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.953918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.953951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.954135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.954168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.954363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.954394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.954607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.954639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.338 [2024-12-09 10:41:30.954829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.954862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.955102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.955134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.955306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.955337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.955510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.955543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.955715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.955746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.338 [2024-12-09 10:41:30.955880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.955913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.956103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.956146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.956268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.956299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.956559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.956591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.956702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.956733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.338 [2024-12-09 10:41:30.956976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.957010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.957193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.957224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.957328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.957359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.957482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.957512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 00:30:53.338 [2024-12-09 10:41:30.957636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.338 [2024-12-09 10:41:30.957668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.338 qpair failed and we were unable to recover it. 
00:30:53.340 [2024-12-09 10:41:30.981141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.981172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.981301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.981338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.981579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.981612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.981795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.981839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.981963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.981995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 
00:30:53.340 [2024-12-09 10:41:30.982181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.982213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.982416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.982447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.982630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.982661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.982874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.982908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.983080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.983112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 
00:30:53.340 [2024-12-09 10:41:30.983348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.983379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.983565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.983597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.983733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.983764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.983889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.983922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 00:30:53.340 [2024-12-09 10:41:30.984114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.984146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.340 qpair failed and we were unable to recover it. 
00:30:53.340 [2024-12-09 10:41:30.984283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.340 [2024-12-09 10:41:30.984315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.984551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.984583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.984709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.984741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.984940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.984972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.985167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.985199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.985373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.985404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.985592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.985623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.985740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.985771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.985893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.985926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.986045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.986076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.986345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.986376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.986511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.986543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.986653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.986685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.986878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.986911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.987169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.987201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.987414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.987446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.987687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.987718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.987902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.987935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.988180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.988211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.988348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.988379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.988589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.988620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.988791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.988832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.989044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.989076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.989195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.989226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.989333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.989364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.989551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.989582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.989750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.989786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.989970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.990003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.990242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.990273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.990449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.990480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.990600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.990632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.990831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.990864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.990988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.991021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.991205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.991236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.991425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.991458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.991725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.991757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.991954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.991986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.992157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.992188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.992392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.992423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.992680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.992712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.992969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.993004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.993132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.993164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.993293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.993324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.993510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.993542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.993724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.993755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.993898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.993931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.994103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.994134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.994244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.994276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.994457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.994488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.994588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.994619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 
00:30:53.341 [2024-12-09 10:41:30.994744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.994775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.994963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.994996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.341 [2024-12-09 10:41:30.995258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.341 [2024-12-09 10:41:30.995290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.341 qpair failed and we were unable to recover it. 00:30:53.617 [2024-12-09 10:41:30.995433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.995465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it. 00:30:53.617 [2024-12-09 10:41:30.995649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.995681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it. 
00:30:53.617 [2024-12-09 10:41:30.995860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.995894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it. 00:30:53.617 [2024-12-09 10:41:30.996068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.996099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it. 00:30:53.617 [2024-12-09 10:41:30.996283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.996315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it. 00:30:53.617 [2024-12-09 10:41:30.996435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.996466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it. 00:30:53.617 [2024-12-09 10:41:30.996755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.996786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it. 
00:30:53.617 [2024-12-09 10:41:30.996973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.617 [2024-12-09 10:41:30.997006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.617 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error retry pair repeats continuously from 10:41:30.996 through 10:41:31.021 against addr=10.0.0.2, port=4420: first for tqpair=0x7f5884000b90, then tqpair=0x7f5890000b90 (from 10:41:31.004), then tqpair=0x7f5888000b90 (from 10:41:31.012); every retry ends with "qpair failed and we were unable to recover it." ...]
00:30:53.620 [2024-12-09 10:41:31.021649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.021681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.021863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.021895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.022083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.022114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.022294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.022326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.022531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.022562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.022772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.022804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.023013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.023046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.023159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.023190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.023319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.023351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.023518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.023550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.023788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.023829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.023936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.023974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.024182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.024214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.024452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.024484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.024587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.024619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.024854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.024889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.025064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.025096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.025264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.025296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.025409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.025441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.025625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.025657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.025768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.025801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.025927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.025959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.026078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.026110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.026301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.026333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.026516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.026547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.026738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.026770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.026910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.026943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.027169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.027201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.027438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.027470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.027707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.027739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.027915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.027949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.028073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.028105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.028232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.028264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.028461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.028493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.028689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.028721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.028843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.028878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.029118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.029149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.029352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.029384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.029488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.029525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.029730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.029761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.029952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.029986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.030100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.030132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.030339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.030370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.030574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.030606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.030862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.030896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 
00:30:53.620 [2024-12-09 10:41:31.031080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.031112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.620 [2024-12-09 10:41:31.031321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.620 [2024-12-09 10:41:31.031354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.620 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.031612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.031644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.031841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.031875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.032143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.032175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 
00:30:53.621 [2024-12-09 10:41:31.032413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.032446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.032632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.032664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.032885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.032919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.033108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.033140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.033320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.033351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 
00:30:53.621 [2024-12-09 10:41:31.033520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.033552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.033735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.033766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.033876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.033909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.034156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.034187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.034434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.034466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 
00:30:53.621 [2024-12-09 10:41:31.034592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.034624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.034816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.034849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.035049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.035081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.035262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.035294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.035465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.035498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 
00:30:53.621 [2024-12-09 10:41:31.035689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.035721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.035903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.035936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.036064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.036096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.036288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.036320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.036506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.036538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 
00:30:53.621 [2024-12-09 10:41:31.036773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.036806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.037017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.037049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.037155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.037188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.037368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.037399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 00:30:53.621 [2024-12-09 10:41:31.037585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.621 [2024-12-09 10:41:31.037617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.621 qpair failed and we were unable to recover it. 
00:30:53.621 [2024-12-09 10:41:31.037791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.621 [2024-12-09 10:41:31.037833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.621 qpair failed and we were unable to recover it.
00:30:53.621-00:30:53.623 [... the same posix_sock_create (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f5888000b90 (addr=10.0.0.2, port=4420) repeats from 10:41:31.038019 through 10:41:31.062726, each attempt ending "qpair failed and we were unable to recover it." ...]
00:30:53.623 [2024-12-09 10:41:31.062915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.062969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.623 [2024-12-09 10:41:31.063239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.063272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.623 [2024-12-09 10:41:31.063517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.063549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.623 [2024-12-09 10:41:31.063741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.063773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.623 [2024-12-09 10:41:31.063955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.063988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 
00:30:53.623 [2024-12-09 10:41:31.064174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.064206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.623 [2024-12-09 10:41:31.064396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.064429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.623 [2024-12-09 10:41:31.064567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.064599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.623 [2024-12-09 10:41:31.064765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.623 [2024-12-09 10:41:31.064804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.623 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.064926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.064958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.065209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.065241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.065441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.065474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.065654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.065687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.065864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.065898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.066138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.066170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.066430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.066462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.066640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.066673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.066945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.066979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.067153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.067184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.067366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.067398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.067576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.067607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.067793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.067833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.068025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.068058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.068160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.068193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.068398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.068431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.068549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.068581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.068821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.068854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.069028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.069061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.069262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.069294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.069409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.069440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.069555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.069587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.069757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.069789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.069996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.070030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.070221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.070255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.070502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.070535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.070660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.070692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.070877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.070911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.071050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.071082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.071275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.071308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.071496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.071530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.071669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.071701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.071894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.071931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.072110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.072143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.072266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.072297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.072412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.072444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.072573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.072605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.072745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.072777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.072917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.072951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.073202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.073241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.073427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.073461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.073702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.073735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.074032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.074066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.074195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.074227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.074396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.074428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 00:30:53.624 [2024-12-09 10:41:31.074617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.624 [2024-12-09 10:41:31.074649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.624 qpair failed and we were unable to recover it. 
00:30:53.624 [2024-12-09 10:41:31.074781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.074823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.074928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.074960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.075145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.075178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.075402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.075435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.075668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.075701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 
00:30:53.625 [2024-12-09 10:41:31.075831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.075867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.075983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.076015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.076286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.076319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.076498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.076529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.076698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.076730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 
00:30:53.625 [2024-12-09 10:41:31.076837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.076872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.077060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.077092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.077262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.077295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.077483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.077516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.077658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.077692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 
00:30:53.625 [2024-12-09 10:41:31.077840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.077874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.078083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.078117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.078300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.078333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.078516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.078548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 00:30:53.625 [2024-12-09 10:41:31.078676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.625 [2024-12-09 10:41:31.078708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.625 qpair failed and we were unable to recover it. 
00:30:53.625 [2024-12-09 10:41:31.078839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.625 [2024-12-09 10:41:31.078873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.625 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (posix.c:1054); sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2288); "qpair failed and we were unable to recover it." — repeats continuously from 10:41:31.078996 through 10:41:31.100814; the retries logged between 10:41:31.094284 and 10:41:31.094776 report tqpair=0x7f5890000b90 instead ...]
00:30:53.627 [2024-12-09 10:41:31.100975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.627 [2024-12-09 10:41:31.101007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.627 qpair failed and we were unable to recover it.
00:30:53.627 [2024-12-09 10:41:31.101129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.101161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.101285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.101317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.101493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.101524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.101630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.101661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.101845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.101878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 
00:30:53.627 [2024-12-09 10:41:31.102129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.102198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.102369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.102437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.102562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.102599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.102779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.102822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.102944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.102977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 
00:30:53.627 [2024-12-09 10:41:31.103091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.103123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.103332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.103364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.103482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.103514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.103720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.627 [2024-12-09 10:41:31.103753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.627 qpair failed and we were unable to recover it. 00:30:53.627 [2024-12-09 10:41:31.103879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.103914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.104019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.104051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.104231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.104263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.104370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.104402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.104508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.104549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.104657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.104689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.104794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.104837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.104953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.104986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.105092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.105124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.105340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.105372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.105540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.105572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.105755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.105788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.105977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.106011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.106206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.106239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.106416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.106448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.106564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.106597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.106820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.106854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.107035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.107068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.107313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.107347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.107526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.107558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.107787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.107829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.107953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.107986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.108122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.108154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.108343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.108376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.108556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.108588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.108767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.108800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.108924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.108956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.109063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.109095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.109297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.109328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.109451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.109483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.109692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.109725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.109932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.109975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.110204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.110237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.110438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.110471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.110575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.110607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.110780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.110825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.111001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.111033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.111274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.111305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.111507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.111538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.111796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.111841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.111963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.111995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.112257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.112289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.112474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.112506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.112630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.112662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.112895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.112929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.113059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.113091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.113259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.113290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.113470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.113502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.113738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.113769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.113899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.113932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.628 [2024-12-09 10:41:31.114101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.114132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 
00:30:53.628 [2024-12-09 10:41:31.114310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.628 [2024-12-09 10:41:31.114342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.628 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.114509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.114540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.114787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.114830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.114956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.114987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.115233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.115265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 
00:30:53.629 [2024-12-09 10:41:31.115388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.115419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.115616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.115648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.115846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.115885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.116095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.116128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 00:30:53.629 [2024-12-09 10:41:31.116236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.629 [2024-12-09 10:41:31.116267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.629 qpair failed and we were unable to recover it. 
00:30:53.629 [2024-12-09 10:41:31.116391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.629 [2024-12-09 10:41:31.116423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.629 qpair failed and we were unable to recover it.
00:30:53.630 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x111ebe0 repeats from 10:41:31.116560 through 10:41:31.129433 ...]
00:30:53.630 [2024-12-09 10:41:31.129616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.630 [2024-12-09 10:41:31.129648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.630 qpair failed and we were unable to recover it.
00:30:53.630 [2024-12-09 10:41:31.129982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.630 [2024-12-09 10:41:31.130054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.630 qpair failed and we were unable to recover it.
00:30:53.631 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x7f5884000b90 repeats from 10:41:31.130195 through 10:41:31.139855 ...]
00:30:53.631 [2024-12-09 10:41:31.140035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.140067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.140189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.140220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.140406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.140438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.140607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.140655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.140774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.140806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 
00:30:53.631 [2024-12-09 10:41:31.141055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.141086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.141256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.141287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.141497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.141528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.141784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.141830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.142066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.142098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 
00:30:53.631 [2024-12-09 10:41:31.142287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.142318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.142441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.142472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.142641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.142672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.142928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.142961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.143149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.143180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 
00:30:53.631 [2024-12-09 10:41:31.143298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.143329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.143603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.143634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.143828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.143861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.144123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.144154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.144415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.144446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 
00:30:53.631 [2024-12-09 10:41:31.144626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.144657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.144839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.144872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.144999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.145031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.145152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.145183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.145349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.145380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 
00:30:53.631 [2024-12-09 10:41:31.145565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.631 [2024-12-09 10:41:31.145597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.631 qpair failed and we were unable to recover it. 00:30:53.631 [2024-12-09 10:41:31.145785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.145836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.146012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.146045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.146235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.146266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.146461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.146493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.146789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.146834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.147072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.147104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.147272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.147304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.147499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.147531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.147661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.147692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.147929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.147963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.148102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.148134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.148384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.148415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.148651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.148683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.148862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.148895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.149011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.149043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.149284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.149315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.149550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.149582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.149764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.149802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.149983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.150015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.150133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.150164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.150355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.150386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.150570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.150602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.150772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.150804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.151001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.151033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.151293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.151325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.151559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.151590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.151781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.151823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.152048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.152080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.152185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.152217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.152397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.152429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.152555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.152587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.152798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.152841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.153013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.153045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.153256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.153288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.153406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.153438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.153571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.153603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.153792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.153851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.153973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.154005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.154180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.154212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.154405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.154437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.154624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.154656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.154892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.154925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.155188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.155220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.155344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.155376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.155574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.155606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.155791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.155833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.156007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.156039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.156221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.156252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 00:30:53.632 [2024-12-09 10:41:31.156434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.632 [2024-12-09 10:41:31.156466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.632 qpair failed and we were unable to recover it. 
00:30:53.632 [2024-12-09 10:41:31.156700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.632 [2024-12-09 10:41:31.156732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.632 qpair failed and we were unable to recover it.
00:30:53.635 [last message repeated for every subsequent connection retry through 2024-12-09 10:41:31.180503: each connect() to 10.0.0.2 port 4420 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:30:53.635 [2024-12-09 10:41:31.180753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.180790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.181072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.181105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.181295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.181326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.181457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.181488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.181731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.181762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 
00:30:53.635 [2024-12-09 10:41:31.182036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.182069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.182276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.182307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.182561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.182593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.182781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.182820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.182953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.182985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 
00:30:53.635 [2024-12-09 10:41:31.183113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.183144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.183354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.183387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.183652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.183683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.183877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.183911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.184190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.184223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 
00:30:53.635 [2024-12-09 10:41:31.184481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.184512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.184626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.184657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.184837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.184871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.184994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.185025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.185149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.185180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 
00:30:53.635 [2024-12-09 10:41:31.185313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.185344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.185452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.185483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.185655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.185686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.185795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.185837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.186082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.186113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 
00:30:53.635 [2024-12-09 10:41:31.186300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.186332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.186553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.186584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.186776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.186814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.186996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.187028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.187237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.187269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 
00:30:53.635 [2024-12-09 10:41:31.187516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.187547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.187724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.187757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.187902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.187934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.188115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.188146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.188259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.188290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 
00:30:53.635 [2024-12-09 10:41:31.188502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.188533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.188711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.188742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.188928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.188962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.635 [2024-12-09 10:41:31.189132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.635 [2024-12-09 10:41:31.189163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.635 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.189354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.189385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.189575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.189613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.189741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.189772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.189914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.189946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.190057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.190088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.190221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.190251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.190388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.190419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.190679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.190712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.190975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.191007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.191188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.191219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.191389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.191420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.191634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.191666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.191869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.191901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.192024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.192055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.192232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.192263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.192479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.192510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.192702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.192734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.192906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.192939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.193065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.193096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.193316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.193346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.193537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.193569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.193736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.193768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.193907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.193939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.194182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.194213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.194472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.194503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.194626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.194656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.194780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.194818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.194995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.195027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.195214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.195244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.195449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.195481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.195666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.195698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.195877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.195911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.196096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.196127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.196248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.196278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.196466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.196497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 00:30:53.636 [2024-12-09 10:41:31.196702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.196733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.636 [2024-12-09 10:41:31.196919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.636 [2024-12-09 10:41:31.196953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.636 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.222021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.222052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.222227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.222258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.222376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.222407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.222650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.222682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.222857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.222890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.223133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.223163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.223347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.223378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.223553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.223584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.223718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.223748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.223919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.223951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.224188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.224225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.224398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.224428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.224540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.224571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.224831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.224865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.225049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.225080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.225205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.225236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.225475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.225508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.225688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.225719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.225901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.225934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.226145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.226177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.226359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.226390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.226527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.226558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.226791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.226831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.227022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.227053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.227249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.227279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.227452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.227483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.227752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.227784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.228001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.228033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.228270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.228300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.228431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.228463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.228579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.228610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.228780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.228822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.229012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.229043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.229164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.229195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.229309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.229340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.229518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.229550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.229825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.229858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.230076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.230108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.230227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.230259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.230520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.230551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.230684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.230715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.230949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.230983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.231169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.231200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.231308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.231339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.231509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.231540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 
00:30:53.639 [2024-12-09 10:41:31.231651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.231681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.231859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.231893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.232176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.232207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.232382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.639 [2024-12-09 10:41:31.232413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.639 qpair failed and we were unable to recover it. 00:30:53.639 [2024-12-09 10:41:31.232538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.232569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 
00:30:53.640 [2024-12-09 10:41:31.232710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.232747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.232960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.232993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.233174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.233206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.233387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.233417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.233610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.233641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 
00:30:53.640 [2024-12-09 10:41:31.233853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.233886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.234074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.234105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.234341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.234373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.234542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.234573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.234776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.234819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 
00:30:53.640 [2024-12-09 10:41:31.235001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.235033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.235166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.235198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.235487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.235519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.235690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.235721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.235904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.235938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 
00:30:53.640 [2024-12-09 10:41:31.236177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.236209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.236392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.236423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.236627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.236658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.236775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.236806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.236950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.236982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 
00:30:53.640 [2024-12-09 10:41:31.237174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.237206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.237381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.237413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.237618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.237649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.237832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.237865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 00:30:53.640 [2024-12-09 10:41:31.238056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.238087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 
00:30:53.640 [2024-12-09 10:41:31.238198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.640 [2024-12-09 10:41:31.238229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.640 qpair failed and we were unable to recover it. 
00:30:53.642 (same message sequence -- posix.c:1054:posix_sock_create connect() failed errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it -- repeated from [2024-12-09 10:41:31.238418] through [2024-12-09 10:41:31.263266])
00:30:53.642 [2024-12-09 10:41:31.263450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.263481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.263720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.263751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.264032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.264064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.264230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.264261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.264436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.264469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 
00:30:53.642 [2024-12-09 10:41:31.264705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.264737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.264850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.264884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.265124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.265156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.265324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.265356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.265529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.265560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 
00:30:53.642 [2024-12-09 10:41:31.265796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.265837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.266019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.266051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.266170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.266201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.266474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.642 [2024-12-09 10:41:31.266505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.642 qpair failed and we were unable to recover it. 00:30:53.642 [2024-12-09 10:41:31.266687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.266720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.266956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.266989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.267123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.267155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.267328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.267360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.267606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.267637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.267757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.267789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.267982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.268013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.268189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.268221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.268346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.268377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.268558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.268590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.268721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.268752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.269028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.269062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.269245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.269275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.269451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.269483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.269654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.269686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.269829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.269862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.269984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.270015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.270288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.270320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.270429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.270459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.270700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.270731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.270956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.270990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.271249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.271281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.271535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.271566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.271744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.271776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.272052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.272085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.272256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.272289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.272462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.272493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.272699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.272731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.272982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.273016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.273140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.273171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.273352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.273382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.273510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.273541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.273710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.273741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.273940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.273980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.274246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.274277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.274490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.274521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.274734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.274765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.274918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.274950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.275079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.275110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.275347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.275378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.275552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.275584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.275846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.275880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.276000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.276031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.276150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.276182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.276365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.276396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.276580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.276611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.276793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.276833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.277093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.277125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.277374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.277405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.277586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.277617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.277826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.277858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 
00:30:53.643 [2024-12-09 10:41:31.278035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.278066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.643 qpair failed and we were unable to recover it. 00:30:53.643 [2024-12-09 10:41:31.278269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.643 [2024-12-09 10:41:31.278300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.278491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.278524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.278627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.278659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.278847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.278879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 
00:30:53.644 [2024-12-09 10:41:31.279061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.279093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.279265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.279297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.279415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.279446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.279715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.279747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.279972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.280005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 
00:30:53.644 [2024-12-09 10:41:31.280242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.280273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.280534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.280565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.280698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.280730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.280856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.280889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.281097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.281128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 
00:30:53.644 [2024-12-09 10:41:31.281309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.281341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.281542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.281573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.281760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.281792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.282063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.282095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 00:30:53.644 [2024-12-09 10:41:31.282217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.644 [2024-12-09 10:41:31.282248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.644 qpair failed and we were unable to recover it. 
00:30:53.644 [2024-12-09 10:41:31.282418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.282450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.282576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.282607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.282781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.282825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.283025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.283057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.283169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.283200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.283335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.283367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.283552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.283583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.283700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.283731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.283865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.283898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.284031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.284062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.284297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.284328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.284499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.284531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.284700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.284731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.284847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.284880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.285052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.285084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.285266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.285297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.285484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.285515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.285631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.285663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.285844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.285877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.286080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.286111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.286290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.286322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.286436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.286467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.286670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.286702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.286821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.286854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.286990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.287022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.287214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.287245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.287448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.287479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.287673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.644 [2024-12-09 10:41:31.287705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.644 qpair failed and we were unable to recover it.
00:30:53.644 [2024-12-09 10:41:31.287908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.287941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.288190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.288221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.288485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.288516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.288633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.288665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.288913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.288947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.289080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.289111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.289299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.289331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.289499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.289530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.289700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.289731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.289851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.289885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.290009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.290040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.290209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.290241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.290430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.290461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.290722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.290754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.290943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.290981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.291109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.291141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.291327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.291359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.291476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.291508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.291628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.291658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.291922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.291955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.292139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.292170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.292366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.292398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.292572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.292603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.292738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.292769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.292955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.292987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.293170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.293201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.293407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.293439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.293622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.293653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.293841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.293874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.294054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.294086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.294264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.294295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.294480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.294511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.294693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.294724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.294893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.294927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.295195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.295227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.295401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.295432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.295609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.295640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.295754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.295786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.295923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.295955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.296137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.296169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.296271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.296302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.296508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.296539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.296779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.296820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.297012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.297043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.297285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.297316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.297505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.297537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.297819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.297851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.297970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.298001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.298193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.298224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.298433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.298464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.298566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.645 [2024-12-09 10:41:31.298597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.645 qpair failed and we were unable to recover it.
00:30:53.645 [2024-12-09 10:41:31.298723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.298754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.299017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.299050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.299244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.299276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.299461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.299498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.299819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.299851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.300116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.300148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.300318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.300348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.300637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.300668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.300939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.300972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.301150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.301182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.301364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.301396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.301644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.301675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.301846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.301879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.302070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.302102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.302239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.302269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.302527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.646 [2024-12-09 10:41:31.302558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.646 qpair failed and we were unable to recover it.
00:30:53.646 [2024-12-09 10:41:31.302736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.302769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.302990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.303022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.303299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.303332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.303520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.303551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.303788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.303825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-12-09 10:41:31.304008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.304040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.304227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.304259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.304443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.304474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.304675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.304707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.304836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.304870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-12-09 10:41:31.305048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.305078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.305198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.305229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.305489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.305520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.305719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.305751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.305942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.305975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-12-09 10:41:31.306148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.306179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.306282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.306313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.306434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.306465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.306645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.306676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.306851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.306883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-12-09 10:41:31.307054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.307084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.307264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.307296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.307420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.307452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.307653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.307684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.307921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.307954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-12-09 10:41:31.308189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.308221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.308350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.308381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.308570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.308607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.308781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.308823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.309014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.309045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-12-09 10:41:31.309286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.309317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.309501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.309533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.309708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.309739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.309982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.310015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-12-09 10:41:31.310139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.310171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-12-09 10:41:31.310343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.646 [2024-12-09 10:41:31.310375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.310548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.310579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.310752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.310783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.310986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.311018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.311211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.311243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.311424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.311455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.311591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.311623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.311816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.311848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.311952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.311984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.312174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.312206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.312389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.312421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.312663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.312694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.312867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.312900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.313107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.313138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.313407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.313439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.313652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.313684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.313872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.313906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.314146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.314177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.314308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.314339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.314548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.314581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.314691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.314723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.314855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.314889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.315011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.315043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.315257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.315288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.315408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.315440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.315555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.315587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.315847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.315880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.316143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.316174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.316294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.316326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.316448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.316479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.316664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.316696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.316863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.316896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.317034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.317071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.317272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.317302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.317420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.317453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.317628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.317659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.317896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.317929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.318052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.318083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.318269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.318301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.318423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.318454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.318662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.318694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.318933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.318967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.319231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.319263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.319450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.319481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.319597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.319628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-12-09 10:41:31.319873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.319906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.320100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.320132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.320328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.647 [2024-12-09 10:41:31.320360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-12-09 10:41:31.320559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.648 [2024-12-09 10:41:31.320590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.648 qpair failed and we were unable to recover it. 00:30:53.648 [2024-12-09 10:41:31.320778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.648 [2024-12-09 10:41:31.320819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.648 qpair failed and we were unable to recover it. 
00:30:53.648 [2024-12-09 10:41:31.320960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.648 [2024-12-09 10:41:31.320991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.648 qpair failed and we were unable to recover it.
00:30:53.934 [2024-12-09 10:41:31.331589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.934 [2024-12-09 10:41:31.331660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.934 qpair failed and we were unable to recover it.
00:30:53.935 [2024-12-09 10:41:31.340403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.935 [2024-12-09 10:41:31.340473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.935 qpair failed and we were unable to recover it.
00:30:53.935 [2024-12-09 10:41:31.346392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.935 [2024-12-09 10:41:31.346423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.935 qpair failed and we were unable to recover it. 00:30:53.935 [2024-12-09 10:41:31.346637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.935 [2024-12-09 10:41:31.346680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.935 qpair failed and we were unable to recover it. 00:30:53.935 [2024-12-09 10:41:31.346820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.935 [2024-12-09 10:41:31.346852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.935 qpair failed and we were unable to recover it. 00:30:53.935 [2024-12-09 10:41:31.347047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.935 [2024-12-09 10:41:31.347080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.935 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.347269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.347300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 
00:30:53.936 [2024-12-09 10:41:31.347489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.347521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.347637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.347667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.347796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.347839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.348100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.348131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.348307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.348339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 
00:30:53.936 [2024-12-09 10:41:31.348464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.348495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.348629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.348661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.348861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.348895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.349026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.349057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.349158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.349189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 
00:30:53.936 [2024-12-09 10:41:31.349430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.349462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.349703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.349735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.349867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.349900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.350095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.350126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.350313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.350345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 
00:30:53.936 [2024-12-09 10:41:31.350467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.350499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.350740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.350771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.350963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.350995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.351180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.351213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.351457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.351488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 
00:30:53.936 [2024-12-09 10:41:31.351673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.351705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.351971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.352004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.352203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.352234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.352359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.352397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.352634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.352666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 
00:30:53.936 [2024-12-09 10:41:31.352852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.352886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.353068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.353099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.353340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.353372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.936 qpair failed and we were unable to recover it. 00:30:53.936 [2024-12-09 10:41:31.353562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.936 [2024-12-09 10:41:31.353594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.353859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.353892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 
00:30:53.937 [2024-12-09 10:41:31.354128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.354160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.354399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.354430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.354637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.354668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.354927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.354959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.355148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.355179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 
00:30:53.937 [2024-12-09 10:41:31.355366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.355398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.355535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.355566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.355751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.355783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.355924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.355956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.356104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.356135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 
00:30:53.937 [2024-12-09 10:41:31.356331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.356362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.356532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.356562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.356744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.356776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.357004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.357036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.357217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.357249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 
00:30:53.937 [2024-12-09 10:41:31.357368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.357400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.357579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.357609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.357727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.357759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.358011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.358044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.358256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.358287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 
00:30:53.937 [2024-12-09 10:41:31.358418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.358455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.358637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.358668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.358934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.358966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.359166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.359198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.937 qpair failed and we were unable to recover it. 00:30:53.937 [2024-12-09 10:41:31.359438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.937 [2024-12-09 10:41:31.359470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 
00:30:53.938 [2024-12-09 10:41:31.359739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.359770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.359891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.359924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.360111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.360143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.360390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.360422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.360569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.360601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 
00:30:53.938 [2024-12-09 10:41:31.360788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.360827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.361035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.361068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.361264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.361295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.361481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.361513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.361710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.361742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 
00:30:53.938 [2024-12-09 10:41:31.361869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.361902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.362074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.362104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.362280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.362312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.362482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.362513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 00:30:53.938 [2024-12-09 10:41:31.362622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.362653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 
00:30:53.938 [2024-12-09 10:41:31.362765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.938 [2024-12-09 10:41:31.362797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.938 qpair failed and we were unable to recover it. 
[Log condensed: the same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats without interruption from 10:41:31.362765 through 10:41:31.389411 (roughly 115 occurrences), always against addr=10.0.0.2, port=4420, alternating between tqpair=0x111ebe0 and tqpair=0x7f5884000b90. The repeated entries are omitted here.]
00:30:53.942 [2024-12-09 10:41:31.389645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.942 [2024-12-09 10:41:31.389677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.942 qpair failed and we were unable to recover it. 00:30:53.942 [2024-12-09 10:41:31.389867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.942 [2024-12-09 10:41:31.389901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.942 qpair failed and we were unable to recover it. 00:30:53.942 [2024-12-09 10:41:31.390014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.390046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.390225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.390257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.390380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.390411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 
00:30:53.943 [2024-12-09 10:41:31.390589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.390621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.390827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.390860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.390992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.391022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.391192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.391223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.391480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.391511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 
00:30:53.943 [2024-12-09 10:41:31.391722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.391753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.391947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.391979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.392168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.392199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.392411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.392442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.392617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.392654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 
00:30:53.943 [2024-12-09 10:41:31.392918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.392951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.393216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.393247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.393368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.393400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.393632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.393664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.393910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.393943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 
00:30:53.943 [2024-12-09 10:41:31.394217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.394248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.394482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.394514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.394695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.394727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.394913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.394946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.395210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.395241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 
00:30:53.943 [2024-12-09 10:41:31.395367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.395399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.395500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.395531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.395765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.395796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.395988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.396021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.396262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.396294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 
00:30:53.943 [2024-12-09 10:41:31.396478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.396509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.943 [2024-12-09 10:41:31.396719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.943 [2024-12-09 10:41:31.396751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.943 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.397004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.397037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.397295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.397327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.397457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.397489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 
00:30:53.944 [2024-12-09 10:41:31.397604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.397635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.397872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.397909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.398096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.398128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.398320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.398352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.398487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.398519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 
00:30:53.944 [2024-12-09 10:41:31.398701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.398733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.398913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.398953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.399195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.399226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.399371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.399403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.399661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.399692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 
00:30:53.944 [2024-12-09 10:41:31.399806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.399848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.399978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.400010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.400204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.400236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.400472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.400503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.400793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.400834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 
00:30:53.944 [2024-12-09 10:41:31.401064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.401096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.401360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.401391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.401631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.401663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.401849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.401882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.401987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.402018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 
00:30:53.944 [2024-12-09 10:41:31.402212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.402243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.402348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.402379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.402581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.402613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.402780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.402820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.402996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.403028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 
00:30:53.944 [2024-12-09 10:41:31.403290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.403322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.403448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.944 [2024-12-09 10:41:31.403479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.944 qpair failed and we were unable to recover it. 00:30:53.944 [2024-12-09 10:41:31.403739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.403770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.403983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.404016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.404139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.404170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 
00:30:53.945 [2024-12-09 10:41:31.404386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.404417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.404556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.404588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.404714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.404745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.404869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.404908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.405098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.405130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 
00:30:53.945 [2024-12-09 10:41:31.405299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.405331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.405540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.405572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.405777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.405828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.406054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.406086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 00:30:53.945 [2024-12-09 10:41:31.406274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.945 [2024-12-09 10:41:31.406305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.945 qpair failed and we were unable to recover it. 
00:30:53.945 [2024-12-09 10:41:31.406542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.945 [2024-12-09 10:41:31.406574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.945 qpair failed and we were unable to recover it.
[... the same three-line record — connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." — repeats continuously through 10:41:31.432052, first for tqpair=0x111ebe0 and then for tqpair=0x7f5888000b90, always against addr=10.0.0.2, port=4420 ...]
00:30:53.948 [2024-12-09 10:41:31.432180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.948 [2024-12-09 10:41:31.432210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.948 qpair failed and we were unable to recover it. 00:30:53.948 [2024-12-09 10:41:31.432469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.948 [2024-12-09 10:41:31.432501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.948 qpair failed and we were unable to recover it. 00:30:53.948 [2024-12-09 10:41:31.432792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.948 [2024-12-09 10:41:31.432833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.948 qpair failed and we were unable to recover it. 00:30:53.948 [2024-12-09 10:41:31.432961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.432992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.433179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.433211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 
00:30:53.949 [2024-12-09 10:41:31.433415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.433447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.433632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.433664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.433854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.433888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.434158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.434190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.434317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.434348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 
00:30:53.949 [2024-12-09 10:41:31.434606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.434638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.434757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.434788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.435009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.435042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.435239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.435271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.435575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.435607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 
00:30:53.949 [2024-12-09 10:41:31.435742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.435774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.436043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.436077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.436206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.436237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.436354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.436386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.436497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.436529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 
00:30:53.949 [2024-12-09 10:41:31.436821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.436854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.437049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.437081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.437208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.437240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.437436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.437468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 00:30:53.949 [2024-12-09 10:41:31.437584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.949 [2024-12-09 10:41:31.437615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.949 qpair failed and we were unable to recover it. 
00:30:53.949 [2024-12-09 10:41:31.437734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.437765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.437976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.438010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.438119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.438151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.438389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.438421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.438611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.438643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 
00:30:53.950 [2024-12-09 10:41:31.438834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.438868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.439048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.439081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.439290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.439321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.439425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.439457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.439656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.439688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 
00:30:53.950 [2024-12-09 10:41:31.439905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.439939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.440142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.440173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.440357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.440388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.440517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.440550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.440738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.440776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 
00:30:53.950 [2024-12-09 10:41:31.441044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.441076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.441264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.441296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.441483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.441514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.441621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.441652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.441780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.441831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 
00:30:53.950 [2024-12-09 10:41:31.442090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.442121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.442355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.442387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.442518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.442550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.442752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.442783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.442980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.443012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 
00:30:53.950 [2024-12-09 10:41:31.443164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.443196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.443369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.443400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.443503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.443535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.950 qpair failed and we were unable to recover it. 00:30:53.950 [2024-12-09 10:41:31.443760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.950 [2024-12-09 10:41:31.443793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.443922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.443955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 
00:30:53.951 [2024-12-09 10:41:31.444190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.444222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.444408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.444439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.444624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.444655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.444830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.444863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.445051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.445082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 
00:30:53.951 [2024-12-09 10:41:31.445197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.445229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.445416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.445448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.445629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.445661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.445861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.445894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.446075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.446107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 
00:30:53.951 [2024-12-09 10:41:31.446230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.446261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.446558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.446590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.446767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.446798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.447015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.447048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.447145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.447177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 
00:30:53.951 [2024-12-09 10:41:31.447317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.447348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.447526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.447559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.447727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.447759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.447894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.447927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.448112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.448144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 
00:30:53.951 [2024-12-09 10:41:31.448313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.448344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.448524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.448556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.448745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.448777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.449051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.449085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.449276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.449314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 
00:30:53.951 [2024-12-09 10:41:31.449487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.449519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.449771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.449804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.951 qpair failed and we were unable to recover it. 00:30:53.951 [2024-12-09 10:41:31.450018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.951 [2024-12-09 10:41:31.450050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.450291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.450323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.450580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.450613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 
00:30:53.952 [2024-12-09 10:41:31.450729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.450760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.450954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.450987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.451202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.451234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.451350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.451382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.451566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.451597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 
00:30:53.952 [2024-12-09 10:41:31.451785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.451828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.451958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.451989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.452177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.452209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.452332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.452365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.452537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.452570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 
00:30:53.952 [2024-12-09 10:41:31.452772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.452804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.453068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.453100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.453309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.453341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.453549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.453582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.453696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.453728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 
00:30:53.952 [2024-12-09 10:41:31.453964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.453999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.454109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.454140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.454331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.454363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.454539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.454573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.454773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.454805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 
00:30:53.952 [2024-12-09 10:41:31.454987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.455019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.455229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.455263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.455500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.455531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.455706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.455738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.455930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.455963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 
00:30:53.952 [2024-12-09 10:41:31.456158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.456189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.456369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.456401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.456584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.456616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.952 [2024-12-09 10:41:31.456801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.952 [2024-12-09 10:41:31.456840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.952 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.456976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.457008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.457135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.457167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.457286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.457318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.457442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.457474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.457581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.457612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.457831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.457871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.458122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.458154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.458343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.458374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.458554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.458586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.458833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.458867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.459104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.459136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.459342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.459373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.459577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.459609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.459728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.459760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.459879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.459911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.460185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.460219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.460392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.460424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.460551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.460582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.460689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.460722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.460969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.461002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.461176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.461208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.461339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.461371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.461540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.461573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.461848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.461881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.461984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.462016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.462208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.462240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.462509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.462542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.462729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.462761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.463032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.463065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.463192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.463224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.463434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.463467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.463639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.463670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.463939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.463973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.464167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.464198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.464378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.464409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.464619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.464651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 
00:30:53.953 [2024-12-09 10:41:31.464844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.464879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.953 [2024-12-09 10:41:31.465068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.953 [2024-12-09 10:41:31.465100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.953 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.465233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.465265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.465449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.465480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.465719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.465751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 
00:30:53.954 [2024-12-09 10:41:31.465878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.465911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.466123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.466154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.466260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.466292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.466496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.466528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.466699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.466737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 
00:30:53.954 [2024-12-09 10:41:31.466905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.466939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.467133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.467165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.467369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.467400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.467571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.467603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.467791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.467832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 
00:30:53.954 [2024-12-09 10:41:31.468041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.468074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.468342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.468374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.468481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.468513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.468752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.468783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.468900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.468933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 
00:30:53.954 [2024-12-09 10:41:31.469118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.469150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.469409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.469440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.469640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.469672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.469854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.469888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 00:30:53.954 [2024-12-09 10:41:31.470075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.954 [2024-12-09 10:41:31.470108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.954 qpair failed and we were unable to recover it. 
00:30:53.957 [2024-12-09 10:41:31.489963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.957 [2024-12-09 10:41:31.490035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.957 qpair failed and we were unable to recover it.
00:30:53.957 [2024-12-09 10:41:31.494105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.494138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.957 [2024-12-09 10:41:31.494351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.494383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.957 [2024-12-09 10:41:31.494559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.494592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.957 [2024-12-09 10:41:31.494794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.494839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.957 [2024-12-09 10:41:31.494965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.494997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 
00:30:53.957 [2024-12-09 10:41:31.495259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.495291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.957 [2024-12-09 10:41:31.495477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.495510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.957 [2024-12-09 10:41:31.495638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.495670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.957 [2024-12-09 10:41:31.495799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.957 [2024-12-09 10:41:31.495849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.957 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.495982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.496015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.496205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.496239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.496499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.496532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.496744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.496777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.497021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.497056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.497243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.497276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.497553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.497586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.497776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.497820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.498013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.498046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.498256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.498290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.498509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.498543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.498823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.498857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.499048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.499081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.499281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.499315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.499554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.499588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.499764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.499797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.500024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.500059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.500267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.500300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.500476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.500510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.500773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.500819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.501094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.501127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.501255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.501289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.501484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.501518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.501695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.501728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.502012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.502048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.502261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.502295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.502556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.502595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.502857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.502891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.503025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.503058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.503267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.503301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.503437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.503470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.503660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.503693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.503882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.503917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.504131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.504164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.504338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.504371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 00:30:53.958 [2024-12-09 10:41:31.504647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.958 [2024-12-09 10:41:31.504680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.958 qpair failed and we were unable to recover it. 
00:30:53.958 [2024-12-09 10:41:31.504802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.504845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.504963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.504996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.505178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.505211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.505396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.505429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.505675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.505708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 
00:30:53.959 [2024-12-09 10:41:31.505934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.505970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.506102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.506135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.506323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.506355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.506616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.506649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.506778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.506820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 
00:30:53.959 [2024-12-09 10:41:31.507007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.507041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.507223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.507255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.507377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.507410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.507686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.507718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.507843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.507877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 
00:30:53.959 [2024-12-09 10:41:31.508080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.508113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.508354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.508386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.508630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.508668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.508800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.508844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.509020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.509052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 
00:30:53.959 [2024-12-09 10:41:31.509166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.509199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.509386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.509420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.509697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.509728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.509909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.509941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.510124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.510155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 
00:30:53.959 [2024-12-09 10:41:31.510342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.510371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.510483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.510512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.510635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.510664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.510853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.510883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.511108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.511137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 
00:30:53.959 [2024-12-09 10:41:31.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.511339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.511600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.511632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.511829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.511862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.959 [2024-12-09 10:41:31.512057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.959 [2024-12-09 10:41:31.512086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.959 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.512324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.512354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.512531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.512561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.512697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.512727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.512909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.512941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.513109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.513140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.513281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.513311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.513561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.513592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.513886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.513917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.514046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.514078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.514207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.514238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.514365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.514396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.514541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.514573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.514840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.514873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.515057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.515087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.515286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.515318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.515456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.515488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.515735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.515766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.516041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.516073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.516255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.516287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.516414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.516446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.516618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.516650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.516885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.516919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.517176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.517208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.517473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.517507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.517651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.517686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.517898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.517931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.518170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.518204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.518327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.518361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.518551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.518585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.518825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.518859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.519047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.519080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.519275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.519309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.519567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.519600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.519729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.519759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.519959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.519994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.520176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.520208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 
00:30:53.960 [2024-12-09 10:41:31.520379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.520411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.520593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.960 [2024-12-09 10:41:31.520626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.960 qpair failed and we were unable to recover it. 00:30:53.960 [2024-12-09 10:41:31.520803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.520845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.521140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.521173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.521306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.521339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.521457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.521490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.521623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.521656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.521918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.521953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.522132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.522165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.522350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.522383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.522497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.522531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.522714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.522747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.523010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.523044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.523256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.523289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.523413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.523445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.523652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.523691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.523847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.523882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.524011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.524045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.524228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.524261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.524448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.524481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.524656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.524689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.524821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.524854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.525055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.525088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.525282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.525315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.525581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.525614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.525786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.525830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.526014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.526047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.526162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.526195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.526334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.526367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.526568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.526603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.526888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.526925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.527103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.527137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.527308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.527340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.527533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.527566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.527747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.527780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.527914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.527946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.528086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.528119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.528316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.528350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.528538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.528570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 00:30:53.961 [2024-12-09 10:41:31.528682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.528716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.961 qpair failed and we were unable to recover it. 
00:30:53.961 [2024-12-09 10:41:31.528956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.961 [2024-12-09 10:41:31.528991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.529094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.529127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.529419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.529458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.529577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.529611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.529821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.529855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 
00:30:53.962 [2024-12-09 10:41:31.530052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.530085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.530255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.530288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.530418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.530451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.530681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.530714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.530892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.530928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 
00:30:53.962 [2024-12-09 10:41:31.531188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.531220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.531523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.531556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.531744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.531777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.532064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.532137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 00:30:53.962 [2024-12-09 10:41:31.532428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.962 [2024-12-09 10:41:31.532464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:53.962 qpair failed and we were unable to recover it. 
00:30:53.962 [2024-12-09 10:41:31.532733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.962 [2024-12-09 10:41:31.532769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:53.962 qpair failed and we were unable to recover it.
00:30:53.962 [2024-12-09 10:41:31.533078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.962 [2024-12-09 10:41:31.533114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.962 qpair failed and we were unable to recover it.
[... the identical three-line entry (connect() failed, errno = 111 / sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats from 10:41:31.533305 through 10:41:31.542778; verbatim repeats elided ...]
00:30:53.963 [2024-12-09 10:41:31.543041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.963 [2024-12-09 10:41:31.543113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:53.963 qpair failed and we were unable to recover it.
[... the identical three-line entry for tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 repeats from 10:41:31.543332 through 10:41:31.557224; verbatim repeats elided ...]
00:30:53.965 [2024-12-09 10:41:31.557413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.557447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.557687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.557720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.557957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.557992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.558112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.558145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.558315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.558348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 
00:30:53.965 [2024-12-09 10:41:31.558625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.558658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.558834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.558873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.559149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.559182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.559419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.559452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.559579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.559612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 
00:30:53.965 [2024-12-09 10:41:31.559876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.559909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.560045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.560077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.560265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.560298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.560486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.560520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.560781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.560823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 
00:30:53.965 [2024-12-09 10:41:31.560959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.560991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.561105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.561138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.965 qpair failed and we were unable to recover it. 00:30:53.965 [2024-12-09 10:41:31.561340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.965 [2024-12-09 10:41:31.561372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.561494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.561527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.561659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.561692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 
00:30:53.966 [2024-12-09 10:41:31.561959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.562032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.562232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.562270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.562391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.562425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.562607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.562640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.562746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.562779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 
00:30:53.966 [2024-12-09 10:41:31.562998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.563031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.563272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.563306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.563437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.563470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.563669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.563703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.563967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.564002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 
00:30:53.966 [2024-12-09 10:41:31.564118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.564150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.564392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.564424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.564671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.564705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.564950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.564999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.565127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.565158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 
00:30:53.966 [2024-12-09 10:41:31.565397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.565430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.565631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.565664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.565796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.565846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.566017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.566051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.566173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.566205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 
00:30:53.966 [2024-12-09 10:41:31.566454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.566486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.566674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.566707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.566895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.566931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.567062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.567096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.567230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.567262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 
00:30:53.966 [2024-12-09 10:41:31.567437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.567469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.567712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.567746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.567944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.567979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.568177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.568211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.568330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.568364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 
00:30:53.966 [2024-12-09 10:41:31.568489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.568520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.568636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.568667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.966 qpair failed and we were unable to recover it. 00:30:53.966 [2024-12-09 10:41:31.568782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.966 [2024-12-09 10:41:31.568823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.568964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.568997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.569186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.569219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 
00:30:53.967 [2024-12-09 10:41:31.569403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.569436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.569622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.569657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.569844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.569878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.570067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.570099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.570362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.570394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 
00:30:53.967 [2024-12-09 10:41:31.570603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.570636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.570882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.570916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.571184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.571217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.571483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.571516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.571708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.571742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 
00:30:53.967 [2024-12-09 10:41:31.571985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.572019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.572199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.572233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.572417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.572450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.572640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.572671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.572914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.572949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 
00:30:53.967 [2024-12-09 10:41:31.573079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.573111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.573351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.573385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.573576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.573610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.573778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.573837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.574121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.574155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 
00:30:53.967 [2024-12-09 10:41:31.574362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.574395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.574578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.574611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.574832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.574876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.575073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.575106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.575291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.575324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 
00:30:53.967 [2024-12-09 10:41:31.575562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.575595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.575846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.575881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.576067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.576100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.576376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.576410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 00:30:53.967 [2024-12-09 10:41:31.576618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.967 [2024-12-09 10:41:31.576652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.967 qpair failed and we were unable to recover it. 
00:30:53.967 [2024-12-09 10:41:31.576842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.967 [2024-12-09 10:41:31.576878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.967 qpair failed and we were unable to recover it.
00:30:53.967 [2024-12-09 10:41:31.577146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.967 [2024-12-09 10:41:31.577180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.967 qpair failed and we were unable to recover it.
00:30:53.967 [2024-12-09 10:41:31.577366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.967 [2024-12-09 10:41:31.577399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.967 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.577636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.577670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.577851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.577885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.578075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.578109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.578346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.578378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.578645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.578678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.578803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.578849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.579110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.579143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.579330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.579363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.579599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.579633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.579826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.579861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.580153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.580187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.580306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.580339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.580632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.580702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.580906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.580944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.581117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.581151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.581383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.581416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.581531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.581563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.581743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.581776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.581974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.582012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.582128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.582161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.582269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.582302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.582498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.582530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.582656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.582690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.582858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.582891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.583030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.583061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.583251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.583284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.583466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.583500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.583670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.583702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.583893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.583926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.584127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.584158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.584346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.584379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.968 qpair failed and we were unable to recover it.
00:30:53.968 [2024-12-09 10:41:31.584636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.968 [2024-12-09 10:41:31.584668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.584909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.584943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.585144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.585176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.585347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.585380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.585502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.585536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.585726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.585760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.585979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.586014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.586198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.586230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.586367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.586399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.586704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.586738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.586953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.586987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.587185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.587218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.587456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.587489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.587678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.587711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.587842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.587877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.587997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.588029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.588299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.588331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.588539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.588573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.588710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.588742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.588868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.588902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.589023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.589056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.589247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.589286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.589527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.589559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.589777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.589818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.590060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.590093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.590283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.590316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.590436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.590469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.590650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.590682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.590866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.590900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.591142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.591175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.591360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.591393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.591633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.591666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.591780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.969 [2024-12-09 10:41:31.591819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.969 qpair failed and we were unable to recover it.
00:30:53.969 [2024-12-09 10:41:31.591948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.591981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.592101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.592135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.592259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.592291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.592413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.592446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.592620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.592654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.592831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.592866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.593067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.593100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.593341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.593374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.593607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.593641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.593846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.593880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.594107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.594140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.594394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.594428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.594690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.594722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.594927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.594962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.595084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.595118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.595245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.595279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.595449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.595482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.595657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.595691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.595903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.595939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.596158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.596191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.596502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.596747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.596780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.597035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.597068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.597202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.597234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.597416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.597449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.597620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.597654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.597851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.597886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.598062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:53.970 [2024-12-09 10:41:31.598096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:53.970 qpair failed and we were unable to recover it.
00:30:53.970 [2024-12-09 10:41:31.598338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.598377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.970 [2024-12-09 10:41:31.598631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.598665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.970 [2024-12-09 10:41:31.598801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.598844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.970 [2024-12-09 10:41:31.599085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.599119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.970 [2024-12-09 10:41:31.599232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.599265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 
00:30:53.970 [2024-12-09 10:41:31.599448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.599481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.970 [2024-12-09 10:41:31.599618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.599651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.970 [2024-12-09 10:41:31.599826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.599861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.970 [2024-12-09 10:41:31.600039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.970 [2024-12-09 10:41:31.600072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.970 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.600192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.600225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.600343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.600376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.600621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.600655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.600846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.600881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.601055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.601088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.601282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.601316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.601501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.601533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.601775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.601819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.602018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.602052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.602161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.602195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.602377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.602411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.602671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.602705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.602839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.602874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.603139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.603173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.603392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.603426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.603632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.603665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.603912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.603947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.604073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.604106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.604352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.604386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.604507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.604540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.604777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.604818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.604998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.605031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.605267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.605300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.605496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.605529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.605714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.605748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.605883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.605917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.606103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.606136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.606310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.606344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.606593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.606626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.606741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.606774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.606981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.607016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.607264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.607302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.607502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.607536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.607671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.607705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.607829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.607865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.608137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.608171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 
00:30:53.971 [2024-12-09 10:41:31.608344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.608376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.971 [2024-12-09 10:41:31.608548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.971 [2024-12-09 10:41:31.608581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.971 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.608768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.608801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.608993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.609026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.609150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.609184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.972 [2024-12-09 10:41:31.609444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.609477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.609595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.609629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.609800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.609850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.610037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.610070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.610324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.610356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.972 [2024-12-09 10:41:31.610578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.610610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.610715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.610748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.610992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.611027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.611298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.611332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.611544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.611577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.972 [2024-12-09 10:41:31.611712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.611745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.611931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.611967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.612110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.612143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.612326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.612358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.612536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.612570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.972 [2024-12-09 10:41:31.612718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.612751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.612950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.612986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.613168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.613201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.613318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.613352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.613526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.613560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.972 [2024-12-09 10:41:31.613733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.613767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.613952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.613986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.614125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.614157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.614275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.614309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.614439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.614471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.972 [2024-12-09 10:41:31.614661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.614694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.614890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.614926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.615103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.615136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.615317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.615350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.615459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.615492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.972 [2024-12-09 10:41:31.615738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.615777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.616029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.616064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.616245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.616277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.616460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.616493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 00:30:53.972 [2024-12-09 10:41:31.616756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.972 [2024-12-09 10:41:31.616789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.972 qpair failed and we were unable to recover it. 
00:30:53.973 [2024-12-09 10:41:31.617061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.973 [2024-12-09 10:41:31.617096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:53.973 qpair failed and we were unable to recover it. 
00:30:54.254 (last message repeated 114 more times: identical connect() failed, errno = 111 / sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it; first at [2024-12-09 10:41:31.617061], last at [2024-12-09 10:41:31.642010])
00:30:54.254 [2024-12-09 10:41:31.642115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.642155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.642323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.642357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.642472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.642505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.642674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.642707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.642885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.642919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.643163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.643198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.643319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.643351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.643483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.643516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.643702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.643736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.643864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.643899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.644034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.644067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.644239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.644272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.644443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.644475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.644685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.644719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.644965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.645000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.645120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.645153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.645324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.645357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.645528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.645560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.645800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.645841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.646015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.646048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.646175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.646207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.646389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.646423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.646610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.646643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.646829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.646863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.647044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.647078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.647267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.647300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.647470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.647502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.647682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.647715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.647952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.647987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.648120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.648155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.648346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.648379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.648586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.648619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.648817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.648852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.649031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.649064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.649244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.649277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.649385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.649418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.649603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.649636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.649877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.649912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.650165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.650197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 00:30:54.254 [2024-12-09 10:41:31.650407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.254 [2024-12-09 10:41:31.650440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.254 qpair failed and we were unable to recover it. 
00:30:54.254 [2024-12-09 10:41:31.650706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.650746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.650879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.650913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.651021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.651054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.651235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.651268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.651400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.651434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 
00:30:54.255 [2024-12-09 10:41:31.651556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.651589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.651851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.651887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.652014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.652049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.652286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.652319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.652506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.652540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 
00:30:54.255 [2024-12-09 10:41:31.652747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.652779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.652898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.652932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.653221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.653255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.653446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.653478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.653623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.653657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 
00:30:54.255 [2024-12-09 10:41:31.653833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.653867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.654038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.654071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.654256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.654290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.654488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.654521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.654660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.654693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 
00:30:54.255 [2024-12-09 10:41:31.654964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.654999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.655257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.655291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.655412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.655445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.655635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.655668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.655929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.655964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 
00:30:54.255 [2024-12-09 10:41:31.656138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.656170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.656419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.656452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.656723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.656758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.656899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.656933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.657127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.657162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 
00:30:54.255 [2024-12-09 10:41:31.657265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.657298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.657469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.657502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.657631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.657664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.657842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.657877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 00:30:54.255 [2024-12-09 10:41:31.658056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.255 [2024-12-09 10:41:31.658090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.255 qpair failed and we were unable to recover it. 
00:30:54.255 [2024-12-09 10:41:31.658202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.658232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.658400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.658433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.658605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.658639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.658825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.658860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.659067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.659101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.659311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.659350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.659469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.659501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.659627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.659661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.659851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.659886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.660151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.660184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.660292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.660323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.660587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.660619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.660862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.660897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.661031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.661064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.661237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.661270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.661523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.661558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.661752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.661786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.661972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.662006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.662195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.662229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.662355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.662388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.662636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.662668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.662788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.662829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.663076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.663111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.663242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.663275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.663487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.663521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.663723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.663757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.663956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.663991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.664236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.255 [2024-12-09 10:41:31.664270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.255 qpair failed and we were unable to recover it.
00:30:54.255 [2024-12-09 10:41:31.664462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.664496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.664736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.664771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.664961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.664996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.665246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.665280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.665503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.665576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.665784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.665839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.666061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.666095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.666284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.666319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.666514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.666547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.666726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.666760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.666965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.667000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.667243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.667276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.667416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.667450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.667698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.667731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.667917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.667953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.668188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.668221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.668424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.668457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.668657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.668690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.668910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.668946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.669139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.669173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.669397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.669429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.669616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.669649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.669833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.669867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.669999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.670032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.670225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.670257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.670545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.670578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.670849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.670890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.671159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.671195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.671435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.671469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.671656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.671688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.671869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.671903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.672100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.672140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.672346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.672377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.672486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.672518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.672756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.672790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.672941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.672975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.673149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.673182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.673356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.673389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.673642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.673675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.673858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.673893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.674073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.674106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.674354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.674387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.674573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.674607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.674826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.674860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.675056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.675089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.675267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.675299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.675535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.675568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.675755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.675788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.675970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.676005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.676182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.676215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.676346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.676380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.676553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.676586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.676775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.676820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.677082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.677116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.677304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.677336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.677509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.677542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.677799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.677846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.256 qpair failed and we were unable to recover it.
00:30:54.256 [2024-12-09 10:41:31.678105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.256 [2024-12-09 10:41:31.678138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.678328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.678367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.678671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.678704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.678825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.678859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.678987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.679021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.679204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.679237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.679418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.679452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.679717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.679750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.679953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.679987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.680179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.680211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.680348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.680381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.680638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.680672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.680877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.680911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.681037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.681070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.681204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.681237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.681358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.681391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.681510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.681542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.681785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.681834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.682016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.682049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.682315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.682348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.682596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.682630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.682805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.682848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.683024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.683058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.683247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.683279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.683454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.683487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.683744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.683777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.683898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.683932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.684064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.684098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.684305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.684344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.684534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-09 10:41:31.684567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
00:30:54.257 [2024-12-09 10:41:31.684681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.684715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.684838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.684874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.685059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.685092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.685378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.685411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.685617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.685650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-09 10:41:31.685772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.685806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.685954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.685987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.686099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.686132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.686261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.686294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.686568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.686601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-09 10:41:31.686801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.686844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.687058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.687092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.687294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.687328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.687621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.687654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.687917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.687951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-09 10:41:31.688055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.688088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.688300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.688334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.688646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.688679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.688800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.688845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.688961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.688995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-09 10:41:31.689184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.689218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.689454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.689488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.689596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.689630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.689895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.689930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.690146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.690179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-09 10:41:31.690372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.690405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.690606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.690641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.690761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.690794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.690974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.691008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.691183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.691216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-09 10:41:31.691479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.691513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.691751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.691784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.691982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.692016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.692288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.692321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-09 10:41:31.692605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-09 10:41:31.692639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-09 10:41:31.692831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.692867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.692991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.693024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.693253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.693285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.693466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.693498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.693681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.693719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.693844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.693879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.694058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.694091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.694365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.694399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.694523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.694556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.694832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.694867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.695039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.695072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.695335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.695368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.695631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.695664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.695905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.695940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.696126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.696160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.696362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.696395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.696597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.696630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.696835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.696869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.696993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.697027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.697217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.697249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.697438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.697471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.697684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.697718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.697844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.697879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.698063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.698096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.698205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.698238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.698452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.698485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.698587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.698620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.698823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.698858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.699081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.699114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.699289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.699322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.699454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.699487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.699674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.699713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.699922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.699956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.700150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.700184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.700319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.700352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.700485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.700519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.700778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.700819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.701011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.701044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.701224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.701258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.701446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.701479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.701600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.701634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.701756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.701790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.701995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.702030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.702218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.702252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.702427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.702460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.702705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.702738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.702963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.703000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.703126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.703159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.703343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.703377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.703564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.703596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.703721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.703755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.703969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.704003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.704137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.704169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.704424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.704458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.704570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.704602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 
00:30:54.258 [2024-12-09 10:41:31.704842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.258 [2024-12-09 10:41:31.704876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.258 qpair failed and we were unable to recover it. 00:30:54.258 [2024-12-09 10:41:31.704995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.705027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.705163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.705196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.705383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.705422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.705616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.705649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.705887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.705922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.706037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.706071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.706238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.706271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.706377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.706411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.706598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.706631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.706876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.706913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.707172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.707205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.707376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.707408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.707581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.707613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.707743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.707776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.707975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.708009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.708294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.708328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.708522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.708556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.708728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.708762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.708976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.709010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.709197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.709231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.709517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.709550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.709671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.709705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.709945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.709980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.710121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.710154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.710346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.710380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.710553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.710586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.710764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.710798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.711049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.711083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.711346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.711379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.711556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.711589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.711773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.711818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.712093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.712126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.712315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.712350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.712470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.712503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.712706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.712741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.712992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.713028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.713203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.713236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.713425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.713457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.713722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.713755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.713961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.713995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.714246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.714279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.714460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.714493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.714690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.714724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.714869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.714904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.715077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.259 [2024-12-09 10:41:31.715110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.259 qpair failed and we were unable to recover it.
00:30:54.259 [2024-12-09 10:41:31.715285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.259 [2024-12-09 10:41:31.715320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.259 qpair failed and we were unable to recover it.
00:30:54.259 [2024-12-09 10:41:31.715367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112cb20 (9): Bad file descriptor
00:30:54.259 [2024-12-09 10:41:31.715669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.259 [2024-12-09 10:41:31.715742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:54.259 qpair failed and we were unable to recover it.
00:30:54.259 [2024-12-09 10:41:31.715995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.259 [2024-12-09 10:41:31.716033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:54.259 qpair failed and we were unable to recover it.
00:30:54.259 [2024-12-09 10:41:31.716154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.259 [2024-12-09 10:41:31.716188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:54.259 qpair failed and we were unable to recover it.
00:30:54.259 [2024-12-09 10:41:31.716365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.716400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.716640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.716674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.716806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.716850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.717089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.717122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 00:30:54.259 [2024-12-09 10:41:31.717262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.259 [2024-12-09 10:41:31.717297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.259 qpair failed and we were unable to recover it. 
00:30:54.259 [2024-12-09 10:41:31.717533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.259 [2024-12-09 10:41:31.717565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:54.259 qpair failed and we were unable to recover it.
00:30:54.259 [2024-12-09 10:41:31.717676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.259 [2024-12-09 10:41:31.717709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:54.259 qpair failed and we were unable to recover it.
00:30:54.259 [2024-12-09 10:41:31.718012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.718085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.718346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.718383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.718513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.718545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.718805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.718849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.719112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.719145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.719299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.719332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.719439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.719473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.719649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.719682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.719871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.719906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.720020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.720054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.720164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.720197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.720443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.720476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.720761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.720795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.720987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.721021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.721226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.721259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.721441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.721474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.721660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.721694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.721933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.721969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.722153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.722186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.722358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.722391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.722507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.722538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.722747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.722780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.723044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.723077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.723259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.723293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.723433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.723466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.723643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.723676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.723881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.723915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.724124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.260 [2024-12-09 10:41:31.724163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:54.260 qpair failed and we were unable to recover it.
00:30:54.260 [2024-12-09 10:41:31.724408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.724441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.724636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.724669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.724798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.724842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.725093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.725126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.725313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.725346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.725579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.725612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.725793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.725838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.725962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.725996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.726107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.726140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.726311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.726344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.726582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.726614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.726717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.726751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.726868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.726909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.727156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.727189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.727428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.727461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.727643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.727676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.727876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.727911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.728096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.728129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.728253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.728286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.728524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.728558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.728771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.728805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.729099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.729133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.729269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.729302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.729553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.729587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.729713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.729745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.729952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.729987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.730207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.730240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.730433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.730466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.730667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.730701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.730888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.730923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.731118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.731150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.731359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.731393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.731517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.731550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.731819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.731852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.732040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.732074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 
00:30:54.260 [2024-12-09 10:41:31.732243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.732277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.260 [2024-12-09 10:41:31.732452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.260 [2024-12-09 10:41:31.732484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.260 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.732653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.732686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.732877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.732913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.733143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.733215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.733506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.733544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.733727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.733762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.733960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.733995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.734178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.734212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.734448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.734482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.734591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.734624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.734866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.734900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.735075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.735107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.735232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.735265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.735526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.735558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.735662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.735695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.735961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.735997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.736132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.736165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.736306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.736338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.736600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.736633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.736753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.736786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.736973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.737007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.737149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.737182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.737440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.737472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.737730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.737764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.737906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.737940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.738060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.738094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.738268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.738304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.738579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.738614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.738884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.738920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.739116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.739149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.739332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.739365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.739546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.739579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.739768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.739801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.739945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.739977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.740191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.740225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.740440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.740474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.740595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.740628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.740824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.740859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.741070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.741104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.741322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.741354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.741590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.741624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.741757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.741790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.741979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.742013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.742140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.742179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.742367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.742400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.742515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.742548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.742788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.742834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.743075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.743110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.743236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.743269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.743442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.743476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.743595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.743629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.743836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.743871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.743979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.744013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.744146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.744180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 
00:30:54.261 [2024-12-09 10:41:31.744361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.744394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.744660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.744692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.744880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.744915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.745110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.745144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.261 qpair failed and we were unable to recover it. 00:30:54.261 [2024-12-09 10:41:31.745318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.261 [2024-12-09 10:41:31.745351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.745546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.745578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.745784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.745824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.746005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.746037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.746161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.746193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.746378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.746410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.746593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.746627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.746832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.746867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.747010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.747043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.747287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.747319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.747490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.747522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.747775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.747818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.748070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.748105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.748235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.748267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.748391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.748424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.748635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.748668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.748950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.748984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.749168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.749201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.749380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.749414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.749593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.749626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.749741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.749774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.749930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.749964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.750150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.750183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.750353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.750387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.750511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.750544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.750722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.750760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.750942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.750977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.751159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.751192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.751375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.751409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.751528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.751562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.751753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.751786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.752032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.752065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.752308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.752342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.752478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.752511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.752754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.752788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.752974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.753008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.753123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.753156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.753351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.753384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.753518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.753551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.753744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.753776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.753968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.754041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.754254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.754291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.754535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.754570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.754765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.754799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.755001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.755036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.755300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.755333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.755544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.755577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.755829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.755864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.755990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.756023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.756194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.756227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.756489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.756523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.756779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.756825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.757003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.757040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.757225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.757260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.757495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.757528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.757767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.757799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.757989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.758023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.758284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.758317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.758606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.758639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.758829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.758864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.759056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.759089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 
00:30:54.262 [2024-12-09 10:41:31.759288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.759322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.759502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.759535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.759652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.262 [2024-12-09 10:41:31.759684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.262 qpair failed and we were unable to recover it. 00:30:54.262 [2024-12-09 10:41:31.759949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.759984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.760224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.760264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 
00:30:54.263 [2024-12-09 10:41:31.760453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.760486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.760615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.760648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.760829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.760864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.761102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.761134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.761269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.761303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 
00:30:54.263 [2024-12-09 10:41:31.761471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.761504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.761632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.761664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.761937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.761972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.762088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.762121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.762303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.762336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 
00:30:54.263 [2024-12-09 10:41:31.762520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.762553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.762727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.762760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.762885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.762919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.763190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.763223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.763339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.763372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 
00:30:54.263 [2024-12-09 10:41:31.763582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.763615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.763735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.763768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.763901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.763935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.764179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.764211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.764348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.764381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 
00:30:54.263 [2024-12-09 10:41:31.764671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.764704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.764836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.764870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.765142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.765175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.765365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.765398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-09 10:41:31.765606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.765639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 
00:30:54.263 [2024-12-09 10:41:31.765827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.765861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [last message pair repeated for tqpair=0x7f5890000b90 through 2024-12-09 10:41:31.774685]
00:30:54.263 [2024-12-09 10:41:31.770959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-09 10:41:31.771031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.264 [last message pair repeated for tqpair=0x7f5884000b90 through 2024-12-09 10:41:31.774540]
00:30:54.264 [2024-12-09 10:41:31.774976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-09 10:41:31.775048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.265 [last message pair repeated for tqpair=0x111ebe0 through 2024-12-09 10:41:31.791283]
00:30:54.265 [2024-12-09 10:41:31.791472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.791506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.791742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.791774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.792033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.792067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.792265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.792299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.792579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.792611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.792731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.792764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.793044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.793079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.793211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.793245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.793515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.793548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.793690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.793723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.793905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.793940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.794222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.794255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.794433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.794466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.794707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.794739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.795003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.795038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.795154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.795187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.795313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.795345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.795531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.795564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.795827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.795861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.796101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.796134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.796391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.796425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.796691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.796725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.796908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.796943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.797085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.797118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.797303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.797337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.797477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.797510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.797685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.797718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.797838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.797873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.798001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.798033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.798208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.798241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.798478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.798511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.798684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.798717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.798900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.798934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.799112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.799145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.799271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.799304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.799519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.799553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.799804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.799846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.800038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.800072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.800204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.800237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.800418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.800450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.800632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.800666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.800879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.800913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.801179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.801212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.801449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.801481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-09 10:41:31.801684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.801717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-09 10:41:31.801901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-09 10:41:31.801936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.802086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.802119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.802315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.802348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.802542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.802580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.802819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.802854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-09 10:41:31.803040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.803074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.803261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.803294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.803557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.803591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.803782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.803823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.803993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.804026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-09 10:41:31.804248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.804281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.804524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.804558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.804753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.804786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.804928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.804961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.805153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.805187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-09 10:41:31.805370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.805403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.805670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.805703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.805829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.805864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.806074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.806108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.806316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.806349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-09 10:41:31.806565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.806598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.806858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.806893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.807165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.807198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.807383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.807417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.807613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.807645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-09 10:41:31.807833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.807866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.808068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.808101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.808222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.808255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.808440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.808472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.808737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.808769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-09 10:41:31.809072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.809111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.809241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.809274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.809447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.809481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.809610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.809643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-09 10:41:31.809846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-09 10:41:31.809881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.267 [2024-12-09 10:41:31.822022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.267 [2024-12-09 10:41:31.822094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.267 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.834084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.834117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.834244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.834276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.834493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.834525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.834718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.834750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.835010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.835045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.835163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.835196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.835314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.835354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.835544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.835576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.835679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.835710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.835957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.835992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.836173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.836206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.836335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.836368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.836497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.836530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.836721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.836753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.836946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.836980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.837152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.837186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.837310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.837343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.837519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.837552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.837658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.837692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.837964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.837999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.838133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.838166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.838376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.838419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.838693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.838726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.838939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.838974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.839168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.839202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.839377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.839410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.839681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.839715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.839964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.839999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.840173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.840207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.840322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.840355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.840465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.840497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.840737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.840770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.840900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.840935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.841192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.841263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.841409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.841446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.841733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.841768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.841908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.841943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.842140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.842174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.842363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.842397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.842595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.842628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 
00:30:54.268 [2024-12-09 10:41:31.842887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.842922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.268 [2024-12-09 10:41:31.843107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.268 [2024-12-09 10:41:31.843140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.268 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.843279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.843313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.843448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.843480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.843596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.843629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.269 [2024-12-09 10:41:31.843758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.843791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.843979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.844021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.844140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.844173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.844345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.844377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.844504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.844537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.269 [2024-12-09 10:41:31.844664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.844696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.844869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.844903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.845074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.845107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.845226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.845259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.845499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.845533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.269 [2024-12-09 10:41:31.845766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.845799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.845994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.846027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.846142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.846175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.846286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.846319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.846435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.846468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.269 [2024-12-09 10:41:31.846765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.846799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.847056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.847089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.847201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.847235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.847427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.847461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.847698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.847732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.269 [2024-12-09 10:41:31.848007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.848042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.848168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.848201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.848388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.848420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.848612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.848646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.848889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.848924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.269 [2024-12-09 10:41:31.849140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.849173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.849354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.849387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.849629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.849661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.849932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.849973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.850147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.850182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.269 [2024-12-09 10:41:31.850464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.850497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.850734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.850766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.851019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.851053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.851176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.851209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 00:30:54.269 [2024-12-09 10:41:31.851345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.269 [2024-12-09 10:41:31.851378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.269 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.876643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.876676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.876830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.876865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.877130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.877163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.877431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.877464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.877656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.877689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.877885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.877920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.878106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.878139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.878252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.878285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.878404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.878438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.878613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.878646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.878835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.878870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.879052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.879086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.879275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.879309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.879594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.879628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.879832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.879867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.880105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.880139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.880341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.880374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.880494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.880527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.880715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.880748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.880942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.880977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.881091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.881123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.881265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.881299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.881586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.881620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.881794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.881839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.882105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.882139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.882317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.882351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.882588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.882621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.882802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.882863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.883090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.883124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.883319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.883353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.883471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.883506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.883680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.883712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.883980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.884022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.884289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.884324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.884561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.884594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 
00:30:54.271 [2024-12-09 10:41:31.884788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.884830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.885006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.885039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.885303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.271 [2024-12-09 10:41:31.885336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.271 qpair failed and we were unable to recover it. 00:30:54.271 [2024-12-09 10:41:31.885478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.885511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.885638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.885671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.885790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.885832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.886027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.886060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.886253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.886286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.886493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.886526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.886705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.886738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.886858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.886893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.887080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.887113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.887237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.887271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.887461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.887495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.887713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.887746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.887870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.887904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.888094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.888128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.888326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.888360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.888562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.888596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.888778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.888819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.888959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.888992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.889257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.889290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.889427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.889460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.889647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.889680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.889933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.889974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.890122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.890156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.890301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.890335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.890619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.890653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.890869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.890904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.891178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.891213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.891460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.891493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.891723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.891756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.891963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.891997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.892171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.892204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.892416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.892449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.892644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.892676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.892893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.892927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.893164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.893197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.893394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.893427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 00:30:54.272 [2024-12-09 10:41:31.893614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.893647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it. 
00:30:54.272 [2024-12-09 10:41:31.893879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.272 [2024-12-09 10:41:31.893914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.272 qpair failed and we were unable to recover it.
[identical error pair repeated continuously from 10:41:31.893 through 10:41:31.918: connect() failed, errno = 111 (ECONNREFUSED) on tqpair=0x111ebe0, addr=10.0.0.2, port=4420; every attempt ended with "qpair failed and we were unable to recover it."]
00:30:54.274 [2024-12-09 10:41:31.918719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.918752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.918978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.919013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.919214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.919246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.919462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.919495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.919686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.919719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 
00:30:54.274 [2024-12-09 10:41:31.919905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.919941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.920210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.920243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.920364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.920398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.920619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.920652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.920803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.920854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 
00:30:54.274 [2024-12-09 10:41:31.920982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.921016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.921256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.921290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.921490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.921524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.921711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.921745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.921867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.921902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 
00:30:54.274 [2024-12-09 10:41:31.922138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.922172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.922361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.922394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.922582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.922615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.922863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.922898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.923163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.923196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 
00:30:54.274 [2024-12-09 10:41:31.923379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.923413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.923622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.923655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.923842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.923877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.924114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.924147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.924428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.924461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 
00:30:54.274 [2024-12-09 10:41:31.924644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.924678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.924918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.924952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.925142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.925175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.925385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.925418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.925622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.925655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 
00:30:54.274 [2024-12-09 10:41:31.925766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.925815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.925985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.926019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.926212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.926245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.926413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.274 [2024-12-09 10:41:31.926447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.274 qpair failed and we were unable to recover it. 00:30:54.274 [2024-12-09 10:41:31.926657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.926691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.926806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.926849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.927052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.927085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.927252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.927285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.927455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.927488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.927739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.927771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.927903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.927937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.928145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.928179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.928463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.928496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.928697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.928730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.928911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.928945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.929118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.929151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.929439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.929473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.929585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.929618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.929731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.929764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.929900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.929934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.930136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.930169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.930361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.930394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.930627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.930661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.930843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.930878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.931073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.931106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.931308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.931341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.931554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.931587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.931763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.931796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.932060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.932093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.932266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.932304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.932483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.932517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.932638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.932670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.932861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.932895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.933186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.933219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.933340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.933373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.933559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.933592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.933775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.933816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.933994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.934028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.934291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.934323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.934572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.934605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.934825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.934860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.935030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.935063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.935311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.935344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.935461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.935496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.935629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.935662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.275 [2024-12-09 10:41:31.935843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.935878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.936082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.936116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.936238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.936271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.936452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.936485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 00:30:54.275 [2024-12-09 10:41:31.936720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.275 [2024-12-09 10:41:31.936754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.275 qpair failed and we were unable to recover it. 
00:30:54.276 [2024-12-09 10:41:31.946697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.276 [2024-12-09 10:41:31.946731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.276 qpair failed and we were unable to recover it.
00:30:54.276 [2024-12-09 10:41:31.946936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.276 [2024-12-09 10:41:31.946971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.276 qpair failed and we were unable to recover it.
00:30:54.276 [2024-12-09 10:41:31.947216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.276 [2024-12-09 10:41:31.947289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.276 qpair failed and we were unable to recover it.
00:30:54.276 [2024-12-09 10:41:31.947551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.276 [2024-12-09 10:41:31.947587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.276 qpair failed and we were unable to recover it.
00:30:54.276 [2024-12-09 10:41:31.947784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.276 [2024-12-09 10:41:31.947837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.276 qpair failed and we were unable to recover it.
00:30:54.555 [2024-12-09 10:41:31.960782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.960822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.961015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.961048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.961289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.961321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.961504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.961537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.961745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.961778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 
00:30:54.555 [2024-12-09 10:41:31.961982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.962015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.962227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.962259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.962379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.962412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.962651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.962683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.962873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.962908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 
00:30:54.555 [2024-12-09 10:41:31.963148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.963182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.963386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.963419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.963615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.963648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.963755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.963789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.963932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.963965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 
00:30:54.555 [2024-12-09 10:41:31.964150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.964182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.964364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.964397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.964602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.964636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.964819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.964853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.964980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.965014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 
00:30:54.555 [2024-12-09 10:41:31.965132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.965164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.965297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.965328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.555 [2024-12-09 10:41:31.965513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.555 [2024-12-09 10:41:31.965546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.555 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.965666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.965699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.965889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.965923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.966113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.966146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.966337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.966370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.966508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.966540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.966665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.966697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.966880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.966915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.967047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.967086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.967201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.967233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.967409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.967442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.967680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.967714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.967848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.967882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.968005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.968038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.968150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.968181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.969635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.969694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.970032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.970070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.970211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.970246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.970488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.970523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.970782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.970827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.971020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.971054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.971269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.971303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.971553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.971586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.971833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.971870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.971995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.972028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.972163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.972196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.972332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.972364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.972480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.972514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.972702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.972735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.972866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.972900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.973097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.973131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.973268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.973301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.973550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.973584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.973764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.973797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.974004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.974039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.974264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.974337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.974484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.974522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.974641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.974676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.974795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.974854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.975122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.975157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.975345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.975378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.975621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.975654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.975857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.975893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.976100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.976133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.976340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.976374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.976500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.976532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.976714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.976747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.976945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.976979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.977160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.977193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.977321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.977354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.977474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.977507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.977611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.977645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 00:30:54.556 [2024-12-09 10:41:31.977856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.977890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.978008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.978041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
00:30:54.556 [2024-12-09 10:41:31.980364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.556 [2024-12-09 10:41:31.980401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.556 qpair failed and we were unable to recover it. 
[identical connect()/qpair failure messages (errno = 111, tqpair=0x111ebe0 and 0x7f5890000b90, addr=10.0.0.2, port=4420) repeated through 10:41:32.003 omitted]
00:30:54.558 [2024-12-09 10:41:32.003848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.003883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.004069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.004101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.004221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.004255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.004380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.004411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.004608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.004640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 
00:30:54.558 [2024-12-09 10:41:32.004833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.004867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.005042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.005073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.005196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.005228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.005344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.005376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.005560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.005593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 
00:30:54.558 [2024-12-09 10:41:32.005772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.005803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.005949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.005983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-09 10:41:32.006153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-09 10:41:32.006184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.006424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.006457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.006702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.006740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.006858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.006892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.007177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.007208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.007389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.007421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.007538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.007569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.007820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.007852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.008028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.008061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.008244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.008276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.008382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.008413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.008535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.008567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.008783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.008821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.009005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.009037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.009145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.009177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.009359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.009390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.009525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.009557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.009681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.009713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.009897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.009931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.010124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.010155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.010333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.010365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.010492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.010523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.010645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.010676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.010780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.010823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.010942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.010973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.011218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.011251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.011493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.011525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.011699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.011730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.011874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.011921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.012122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.012154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.012270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.012301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.012485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.012518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.012630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.012663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.012838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.012870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.013056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.013087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.013275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.013306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.013433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.013464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.013684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.013716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.013892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.013926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.014074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.014105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.014229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.014263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.014393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.014424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.014607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.014645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.014828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.014861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.014977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.015009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.015143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.015174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.015302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.015334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.015455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.015487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.015600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.015631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.015827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.015861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.016033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.016064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.016181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.016213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.016483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.016515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.016685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.016716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.016902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.016936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.017123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.017155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.017335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.017367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.017484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.017516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.017688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.017721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.017894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.017927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.018156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.018187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.018301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.018334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.018465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.018496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-09 10:41:32.018701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-09 10:41:32.018733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-09 10:41:32.018998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.019032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.019150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.019181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.019304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.019335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.019468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.019500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.019619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.019651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.019860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.019895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.020075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.020107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.020216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.020247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.020366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.020399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.020606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.020638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.020821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.020854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.021042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.021075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.021313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.021345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.021530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.021562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.021828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.021862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.022148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.022179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.022293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.022324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.022454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.022485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.022607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.022645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.022886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.022921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.023137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.023168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.023286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.023317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.023453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.023484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.023712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.023745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.023878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.023911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.024051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.024083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.024262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.024295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.024394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.024426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.024558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.024589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.024785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.024826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.024937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.024970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.025075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.025106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.025306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.025339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.025528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.025559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.025737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.025769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.025898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.025931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.026103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.026135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.026405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.026437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.026571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.026603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.026777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.026818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.026942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.026973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.027094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.027126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.027311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.027342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.027465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.027497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.027682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.027713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.027906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.027940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.028207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.028240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.028361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.028393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.028580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.028612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.028730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.028763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.028907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.028941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.029149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.029179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.029352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.029384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.029581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.029613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-09 10:41:32.029728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.029760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.029903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.029936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.030068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.030099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.030361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-09 10:41:32.030394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-09 10:41:32.030510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.030547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.030669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.030701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.030823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.030856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.030978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.031009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.031177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.031208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.031350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.031381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.031646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.031677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.031856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.031890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.032087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.032119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.032322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.032353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.032623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.032656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.032835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.032869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.033001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.033032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.033172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.033205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.033383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.033415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.033522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.033553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.033671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.033704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.033833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.033866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.034042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.034074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.034211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.034242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.034429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.034462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.034589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.034621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.034752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.034784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.034914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.034947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.035085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.035115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.035245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.035279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.035522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.035554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.035667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.035700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.035828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.035862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.036040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.036073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.036189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.036220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.036398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.036431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.036545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.036577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.036760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.036796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.036923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.036956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 00:30:54.561 [2024-12-09 10:41:32.037125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.561 [2024-12-09 10:41:32.037157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.561 qpair failed and we were unable to recover it. 
00:30:54.561 [2024-12-09 10:41:32.037259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.037291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.037560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.037592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.037697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.037730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.037847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.037880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.038066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.038105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.038275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.038307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.038481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.038512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.038650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.038686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.038818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.038852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.038969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.039002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.039194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.039227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.039401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.039433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.039557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.039590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.039845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.039880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.039995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.040027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.040222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.040254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.040495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.040527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.040786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.040830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.040957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.040990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.041177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.041212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.041416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.041448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.041551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.041584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.041838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.041873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.042004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.042036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.042149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.042181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.042390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.042423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.561 [2024-12-09 10:41:32.042543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.561 [2024-12-09 10:41:32.042575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.561 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.042749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.042782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.043020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.043054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.043170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.043203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.043382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.043415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.043628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.043662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.043848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.043882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.044003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.044035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.044149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.044182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.044386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.044418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.044595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.044627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.044826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.044861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.044969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.045001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.045129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.045162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.045341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.045373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.045508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.045542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.045721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.045754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.045920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.045954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.046063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.046101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.046305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.046337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.046527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.046561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.046674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.046707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.046957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.046992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.047187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.047220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.047328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.047360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.047543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.047576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.047716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.047749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.047861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.047895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.048136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.048167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.048362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.048395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.048591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.048624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.048866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.048899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.049042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.049075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.049194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.049226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.049346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.049378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.049500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.049533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.049648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.049792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.049832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.050040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.050074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.050255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.050288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.050392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.050424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.050531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.050565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.050754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.050786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.050945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.050979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.051099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.051132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.051287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.051359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.051642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.051678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.051935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.051972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.052112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.052144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.052334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.052365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.052626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.052659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.052848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.052883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.053018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.053049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.053238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.053271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.053465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.053497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.053737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.053768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.053900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.053934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.054064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.054096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.054284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.054316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.054472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.054506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.054639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.054670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.054845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.054878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.055013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.055046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.055172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.055203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.562 [2024-12-09 10:41:32.055336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.562 [2024-12-09 10:41:32.055368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.562 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.055477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.055509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.055619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.055650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.055839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.055872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.056000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.056032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.056240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.056271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.056389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.056421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.056633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.056666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.056856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.056895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.057137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.057168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.057288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.057320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.057496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.057528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.057637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.057669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.057783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.057824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.057947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.057978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.058112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.058143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.058254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.058287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.058459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.058489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.058723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.058754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.058958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.058991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.059109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.059141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.059244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.059276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.059405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.059438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.059584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.059615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.059805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.563 [2024-12-09 10:41:32.059850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.563 qpair failed and we were unable to recover it.
00:30:54.563 [2024-12-09 10:41:32.059988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.060020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.060223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.060255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.060365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.060397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.060586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.060618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.060799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.060843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 
00:30:54.563 [2024-12-09 10:41:32.060966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.060998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.061116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.061148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.061261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.061292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.061541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.061573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.061834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.061867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 
00:30:54.563 [2024-12-09 10:41:32.062069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.062107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.062320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.062352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.062595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.062627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.062920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.062953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.063131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.063162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 
00:30:54.563 [2024-12-09 10:41:32.063433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.063466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.063596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.063628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.063869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.063903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.064098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.064130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.064258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.064290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 
00:30:54.563 [2024-12-09 10:41:32.064507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.064539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.064823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.064856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.064993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.065025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.065234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.065266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.065485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.065517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 
00:30:54.563 [2024-12-09 10:41:32.065707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.065739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.065868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.065900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.066086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.066119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.066331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.066363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.066624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.066655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 
00:30:54.563 [2024-12-09 10:41:32.066898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.066931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.067055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.067086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.067267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.067300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.067571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.067604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.563 [2024-12-09 10:41:32.067717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.067748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 
00:30:54.563 [2024-12-09 10:41:32.067944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.563 [2024-12-09 10:41:32.067978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.563 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.068124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.068156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.068276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.068312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.068514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.068546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.068828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.068862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.068995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.069026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.069162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.069193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.069324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.069356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.069656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.069687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.069801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.069844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.069971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.070007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.070183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.070214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.070387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.070420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.070738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.070772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.071059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.071094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.071339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.071375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.071603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.071637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.071888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.071922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.072135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.072167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.072299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.072333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.072665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.072697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.072872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.072905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.073053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.073087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.073274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.073307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.073541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.073573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.073869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.073904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.074052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.074085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.074268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.074301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.074519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.074552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.074822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.074855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.075108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.075141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.075437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.075470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.075730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.075762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.076064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.076098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.076298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.076331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.076532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.076564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.076754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.076787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.076958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.076992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.077145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.077176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.564 [2024-12-09 10:41:32.077360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.077392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 
00:30:54.564 [2024-12-09 10:41:32.077709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.564 [2024-12-09 10:41:32.077743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.564 qpair failed and we were unable to recover it. 00:30:54.566 [... the same three-line error repeated continuously with advancing timestamps (10:41:32.078024 through 10:41:32.106557) as connections to 10.0.0.2:4420 for tqpair=0x111ebe0 were retried; every attempt failed with errno = 111 (ECONNREFUSED) and no qpair recovered ...]
00:30:54.566 [2024-12-09 10:41:32.106743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.106775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.106925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.106959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.107206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.107241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.107413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.107445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.107683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.107716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 
00:30:54.566 [2024-12-09 10:41:32.107931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.107966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.108098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.108131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.108343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.108376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.108641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.108675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.108859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.108891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 
00:30:54.566 [2024-12-09 10:41:32.109205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.109249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.109401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.109433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.109678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.109710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.109902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.109935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.110205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.110238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 
00:30:54.566 [2024-12-09 10:41:32.110562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.110594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.110776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.110816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.110964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.110996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.111189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.111221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.111430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.111462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 
00:30:54.566 [2024-12-09 10:41:32.111657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.111690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.111907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.111941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.112134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.112165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.112302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.112335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.112491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.112524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 
00:30:54.566 [2024-12-09 10:41:32.112793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.112832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.112978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.113012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.113160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.113194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.113429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.113463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.113688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.113721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 
00:30:54.566 [2024-12-09 10:41:32.114009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.566 [2024-12-09 10:41:32.114044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.566 qpair failed and we were unable to recover it. 00:30:54.566 [2024-12-09 10:41:32.114236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.114270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.114554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.114587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.114874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.114908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.115185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.115216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.115347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.115380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.115520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.115552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.115842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.115881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.116097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.116130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.116266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.116297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.116577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.116609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.116804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.116856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.116996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.117029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.117237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.117268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.117524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.117557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.117762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.117794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.118046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.118078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.118282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.118316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.118677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.118709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.118908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.118941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.119073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.119106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.119314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.119346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.119534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.119565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.119747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.119781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.119945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.119977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.120113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.120145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.120279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.120312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.120441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.120472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.120713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.120745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.120896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.120929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.121054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.121086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.121280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.121312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.121545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.121578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.121856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.121890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.122087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.122118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.122320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.122356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.122502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.122534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.122825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.122858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.123045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.123079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.123216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.123248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.123460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.123492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.123600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.123634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.123922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.123955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.124128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.124160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.124365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.124398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.567 [2024-12-09 10:41:32.124609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.124640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.124990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.125023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.125244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.125277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.125598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.125671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 00:30:54.567 [2024-12-09 10:41:32.125915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.567 [2024-12-09 10:41:32.125955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.567 qpair failed and we were unable to recover it. 
00:30:54.568 [2024-12-09 10:41:32.135075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.568 [2024-12-09 10:41:32.135147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420
00:30:54.568 qpair failed and we were unable to recover it.
00:30:54.569 [2024-12-09 10:41:32.152041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.152073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.152274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.152307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.152439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.152471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.152585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.152617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.152804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.152859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 
00:30:54.569 [2024-12-09 10:41:32.153040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.153073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.153355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.153387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.153672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.153706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.154006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.154080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.154247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.154282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 
00:30:54.569 [2024-12-09 10:41:32.154589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.154623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.154881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.154917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.155187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.155221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.155538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.155571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.155761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.155793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 
00:30:54.569 [2024-12-09 10:41:32.156012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.156044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.156242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.156275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.156527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.156560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.156694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.156727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.156940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.156974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 
00:30:54.569 [2024-12-09 10:41:32.157104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.157138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.157335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.157367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.157659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.157693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.157892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.157927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.158051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.158083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 
00:30:54.569 [2024-12-09 10:41:32.158228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.158260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.158571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.158604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.569 qpair failed and we were unable to recover it. 00:30:54.569 [2024-12-09 10:41:32.158736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.569 [2024-12-09 10:41:32.158768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.159031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.159065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.159204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.159236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.159436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.159468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.159715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.159748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.160017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.160051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.160186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.160220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.160377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.160409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.160672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.160712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.160906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.160940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.161137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.161170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.161316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.161348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.161545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.161577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.161793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.161836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.161986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.162019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.162239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.162271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.162483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.162515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.162772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.162804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.163015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.163047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.163197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.163230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.163369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.163401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.163620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.163652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.163843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.163878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.164024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.164058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.164269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.164301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.164503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.164535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.164737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.164769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.164988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.165020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.165219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.165253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.165398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.165431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.165613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.165645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.165868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.165903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.166103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.166136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.166258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.166290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.166523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.166555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.166738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.166776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.166980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.167013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.167206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.167240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.167396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.167429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.167608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.167639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.167938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.167972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.168171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.168203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.168378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.168410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.168658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.168691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.168930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.168964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.169160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.169192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.169458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.169491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 00:30:54.570 [2024-12-09 10:41:32.169759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.570 [2024-12-09 10:41:32.169792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.570 qpair failed and we were unable to recover it. 
00:30:54.570 [2024-12-09 10:41:32.169930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.570 [2024-12-09 10:41:32.169963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:54.570 qpair failed and we were unable to recover it.
00:30:54.572 [message repeated: the same errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence recurred for every connect attempt to 10.0.0.2:4420 (tqpair=0x111ebe0) from 10:41:32.170 through 10:41:32.195]
00:30:54.572 [2024-12-09 10:41:32.195358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.195389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.195592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.195625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.195827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.195859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.196040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.196072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.196265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.196299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 
00:30:54.572 [2024-12-09 10:41:32.196423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.196457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.196699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.196732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.196882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.196916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.197149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.197228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.197377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.197413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 
00:30:54.572 [2024-12-09 10:41:32.197599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.197633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.197757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.197789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.197951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.197985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.198101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.198135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.198307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.198339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 
00:30:54.572 [2024-12-09 10:41:32.198478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.198510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.198713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.198747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.198863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.198898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.199020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.199052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.199238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.199272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 
00:30:54.572 [2024-12-09 10:41:32.199520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.199553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.199754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.199796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.200021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.200055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.200259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.200292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 00:30:54.572 [2024-12-09 10:41:32.200480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.572 [2024-12-09 10:41:32.200512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.572 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.200722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.200755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.200895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.200929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.201176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.201209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.201408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.201441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.201726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.201759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.201902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.201935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.202136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.202170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.202438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.202470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.202616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.202648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.202901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.202936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.203061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.203094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.203273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.203305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.203446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.203479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.203666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.203698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.203899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.203933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.204067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.204100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.204231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.204263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.204452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.204484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.204674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.204707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.204955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.204988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.205229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.205260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.205397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.205431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.205622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.205654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.205803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.205847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.206052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.206085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.206263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.206294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.206528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.206560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.206769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.206802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.207019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.207050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.207178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.207209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.207396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.207428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.207563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.207595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.207698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.207730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.207942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.207977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.208104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.208136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.208419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.208451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.208571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.208610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.208749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.208780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.208912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.208945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.209128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.209162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.209334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.209365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.209628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.209660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.209854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.209889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.210073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.210106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.210399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.210431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.210564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.210597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.210816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.210849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.211032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.211065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.211336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.211368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 00:30:54.573 [2024-12-09 10:41:32.211499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.211531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
00:30:54.573 [2024-12-09 10:41:32.211776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.573 [2024-12-09 10:41:32.211818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.573 qpair failed and we were unable to recover it. 
[... the same connect()/sock-connection-error message pair (posix.c:1054 errno = 111, nvme_tcp.c:2288, tqpair=0x7f5890000b90, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 2024-12-09 10:41:32.211776 through 10:41:32.237504; repeated occurrences condensed ...]
00:30:54.575 [2024-12-09 10:41:32.237504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.237537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 
00:30:54.575 [2024-12-09 10:41:32.237726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.237759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.237976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.238010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.238195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.238227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.238425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.238457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.238657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.238690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 
00:30:54.575 [2024-12-09 10:41:32.238906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.238943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.239081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.239113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.239311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.239344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.239558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.239591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.239724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.239767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 
00:30:54.575 [2024-12-09 10:41:32.239946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.239980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.240222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.240255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.240519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.240551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.240785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.240825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.241048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.241081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 
00:30:54.575 [2024-12-09 10:41:32.241268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.241300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.241442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.241475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.241611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.241643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.241901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.241935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 00:30:54.575 [2024-12-09 10:41:32.242140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.242173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.575 qpair failed and we were unable to recover it. 
00:30:54.575 [2024-12-09 10:41:32.242409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.575 [2024-12-09 10:41:32.242441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.242655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.242688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.242904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.242939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.243055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.243088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.243292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.243323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.243499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.243532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.243796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.243840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.244016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.244049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.244249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.244282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.244551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.244583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.244777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.244819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.244970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.245004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.245217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.245249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.245516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.245548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.245730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.245763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.246029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.246063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.246189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.246222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.246403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.246435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.246586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.246619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.246806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.246863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.247018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.247051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.247254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.247286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.247594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.247626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.247830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.247864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.248074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.248106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.248234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.248268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.248543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.248576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.248829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.248864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.249109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.249143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.249276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.249309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.249634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.249666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.249919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.249953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.250152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.250187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.250380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.250412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.250715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.250747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.250967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.251002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.251149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.251182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.251326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.251358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.251568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.251608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.251877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.251910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.252044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.252076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.252216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.252249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.252557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.252589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.252733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.252765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.252926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.252960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.253205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.253237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.253379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.253411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.253656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.253690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.253821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.253855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.254049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.254081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.254216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.254250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.254445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.254477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.254685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.254717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.254910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.254944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 00:30:54.576 [2024-12-09 10:41:32.255201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.576 [2024-12-09 10:41:32.255233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.576 qpair failed and we were unable to recover it. 
00:30:54.576 [2024-12-09 10:41:32.255410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.576 [2024-12-09 10:41:32.255442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.576 qpair failed and we were unable to recover it.
[... the same triplet — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 10:41:32.255727 through 10:41:32.283265 ...]
00:30:54.870 [2024-12-09 10:41:32.283543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.283576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.283802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.283844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.284099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.284133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.284285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.284318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.284663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.284695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.284960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.284994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.285207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.285240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.285499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.285532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.285841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.285877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.286036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.286070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.286204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.286237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.286419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.286452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.286669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.286701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.286896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.286931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.287136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.287169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.287486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.287519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.287789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.287829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.287982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.288015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.288214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.288247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.288528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.288561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.288822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.288856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.289098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.289132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.289405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.289437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.289630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.289662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.289909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.289943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.290099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.290132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.290281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.290313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.290446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.290479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.290665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.290709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.290910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.290942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.291133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.291166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.291470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.291503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.291710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.291742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.292055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.292091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.292317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.292350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.292579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.292611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.292982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.293017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.293224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.293257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.293512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.293544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.293839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.293874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.294169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.294203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.294350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.294382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.294593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.294627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.294834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.294867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.295048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.295081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.295252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.295285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.295505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.295538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.295727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.295760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.295980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.296014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.296159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.296192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.296515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.296549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.296743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.296775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.297005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.297038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.297190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.297223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.297427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.297460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 
00:30:54.870 [2024-12-09 10:41:32.297769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.297803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.297956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.297989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.298245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.298279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.298513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.870 [2024-12-09 10:41:32.298546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.870 qpair failed and we were unable to recover it. 00:30:54.870 [2024-12-09 10:41:32.298802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.298846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 
00:30:54.871 [2024-12-09 10:41:32.298992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.299025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.299227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.299259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.299497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.299529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.299823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.299857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.300079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.300111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 
00:30:54.871 [2024-12-09 10:41:32.300298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.300330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.300660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.300693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.300884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.300918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.301169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.301209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.301364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.301398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 
00:30:54.871 [2024-12-09 10:41:32.301609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.301641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.301858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.301892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.302176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.302210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.302415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.302448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.302699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.302731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 
00:30:54.871 [2024-12-09 10:41:32.303013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.303047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.303300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.303332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.303578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.303610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.303751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.303783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 00:30:54.871 [2024-12-09 10:41:32.303959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.871 [2024-12-09 10:41:32.303993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.871 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.332455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.332488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.332753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.332786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.333020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.333055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.333252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.333284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.333485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.333518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.333760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.333792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.333940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.333974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.334176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.334208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.334420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.334453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.334646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.334678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.334888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.334922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.335145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.335179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.335474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.335507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.335771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.335803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.335974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.336008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.336164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.336196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.336469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.336502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.336831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.336866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.337074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.337108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.337315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.337346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.337558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.337592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.337861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.337896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.338102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.338135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.338277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.338311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.338448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.338481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.338761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.338795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.339027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.339063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.339204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.339237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.339434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.339468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.339706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.339740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.339914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.339948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.340151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.340184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.340389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.340421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.340697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.340730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.340939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.340972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.341179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.341213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.341358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.341390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.341729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.341761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.341957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.342004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.342282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.342313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.342537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.342569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.342855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.342890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.343038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.343072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.343278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.343312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.343584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.343617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 
00:30:54.873 [2024-12-09 10:41:32.343822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.343857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.343996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.344029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.344288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.873 [2024-12-09 10:41:32.344320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.873 qpair failed and we were unable to recover it. 00:30:54.873 [2024-12-09 10:41:32.344536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.344569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.344832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.344865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 
00:30:54.874 [2024-12-09 10:41:32.345053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.345087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.345227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.345260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.345566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.345599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.345797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.345842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.346067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.346099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 
00:30:54.874 [2024-12-09 10:41:32.346354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.346388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.346704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.346736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.346917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.346952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.347154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.347187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.347393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.347426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 
00:30:54.874 [2024-12-09 10:41:32.347719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.347753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.347969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.348003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.348228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.348262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.348571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.348604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.348871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.348904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 
00:30:54.874 [2024-12-09 10:41:32.349170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.349204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.349480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.349513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.349760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.349793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.350062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.350096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.350295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.350328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 
00:30:54.874 [2024-12-09 10:41:32.350565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.350598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.350865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.350900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.351087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.351120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.351268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.351301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.351513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.351546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 
00:30:54.874 [2024-12-09 10:41:32.351831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.351865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.352070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.352103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.352291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.352324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.352574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.352612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 00:30:54.874 [2024-12-09 10:41:32.352866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.874 [2024-12-09 10:41:32.352900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.874 qpair failed and we were unable to recover it. 
00:30:54.874 [... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error for tqpair=0x7f5890000b90, addr=10.0.0.2, port=4420 repeated through 2024-12-09 10:41:32.381457; every qpair failed and could not be recovered ...] 
00:30:54.876 [2024-12-09 10:41:32.381678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.381711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.381965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.382000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.382279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.382312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.382629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.382662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.382871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.382905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.383055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.383088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.383366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.383399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.383585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.383618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.383870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.383903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.384129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.384162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.384449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.384483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.384745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.384778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.385045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.385079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.385228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.385260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.385415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.385448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.385735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.385768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.386037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.386072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.386298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.386331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.386602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.386636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.386893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.386928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.387142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.387175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.387323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.387357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.387685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.387718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.387917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.387951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.388254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.388286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.388543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.388578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.388732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.388766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.388975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.389009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.389288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.389323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.389620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.389653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.389859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.389892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.390176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.390209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.390478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.390511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.390831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.390866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.391074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.391107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.391364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.391403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.391681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.391713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.391999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.392034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.392256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.392288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.392558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.392591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.392775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.392818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.393026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.393059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.393259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.393292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.393574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.393607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.393801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.393843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 
00:30:54.876 [2024-12-09 10:41:32.393981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.876 [2024-12-09 10:41:32.394014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.876 qpair failed and we were unable to recover it. 00:30:54.876 [2024-12-09 10:41:32.394159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.394192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.394344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.394376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.394578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.394611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.394750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.394783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 
00:30:54.877 [2024-12-09 10:41:32.394925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.394958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.395066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.395099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.395309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.395342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.395632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.395664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.395890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.395925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 
00:30:54.877 [2024-12-09 10:41:32.396062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.396095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.396306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.396340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.396620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.396652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.396961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.396996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.397178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.397210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 
00:30:54.877 [2024-12-09 10:41:32.397415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.397449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.397716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.397749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.397932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.397966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.398102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.398135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.398395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.398428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 
00:30:54.877 [2024-12-09 10:41:32.398630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.398663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.398942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.398976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.399189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.399223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.399483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.399516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.399661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.399693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 
00:30:54.877 [2024-12-09 10:41:32.399898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.399934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.400137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.400170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.400388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.400421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.400694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.400729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.400926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.400959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 
00:30:54.877 [2024-12-09 10:41:32.401212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.401250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.401477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.401510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.401705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.401739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.401946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.401980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 00:30:54.877 [2024-12-09 10:41:32.402288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.877 [2024-12-09 10:41:32.402321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.877 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.432908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.432942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.433138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.433170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.433396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.433428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.433630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.433663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.433936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.433969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.434213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.434252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.434487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.434520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.434791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.434831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.435038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.435070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.435353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.435387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.435667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.435700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.435984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.436018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.436302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.436335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.436606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.436638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.436826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.436859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.437056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.437089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.437343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.437375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.437583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.437615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.437911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.437945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.438133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.438166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.438372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.438403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.438682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.438716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.438973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.439010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.439280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.439312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.439573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.439607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.439915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.439950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.440179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.440214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.440426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.440460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.440738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.440769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.441061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.441094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.441325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.441358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.441569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.441602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.441911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.441945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.442167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.442201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.442548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.442580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.442798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.442839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.443050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.443084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.443235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.443267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.443518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.443550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.443758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.443795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 
00:30:54.879 [2024-12-09 10:41:32.444063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.444096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.879 [2024-12-09 10:41:32.444230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.879 [2024-12-09 10:41:32.444263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.879 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.444552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.444586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.444802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.444849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.444992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.445025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.880 [2024-12-09 10:41:32.445232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.445271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.445573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.445606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.445864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.445899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.446053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.446086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.446270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.446301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.880 [2024-12-09 10:41:32.446602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.446635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.446874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.446908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.447116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.447148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.447347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.447379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.447638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.447671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.880 [2024-12-09 10:41:32.447967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.448001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.448233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.448266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.448473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.448506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.448732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.448763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.448965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.449000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.880 [2024-12-09 10:41:32.449255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.449288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.449587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.449619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.449799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.449840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.450047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.450080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.450337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.450368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.880 [2024-12-09 10:41:32.450491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.450523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.450730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.450764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.451032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.451066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.451363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.451395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.451670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.451703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.880 [2024-12-09 10:41:32.451964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.451997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.452300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.452332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.452616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.452650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.452854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.452887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.453091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.453124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.880 [2024-12-09 10:41:32.453407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.453440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.453743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.453775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.454069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.454103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.454310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.454343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 00:30:54.880 [2024-12-09 10:41:32.454658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.880 [2024-12-09 10:41:32.454690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.880 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.485716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.485747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.486032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.486066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.486324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.486359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.486504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.486535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.486824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.486859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.487154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.487187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.487389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.487420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.487742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.487773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.488067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.488101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.488326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.488358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.488611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.488642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.488903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.488938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.489195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.489227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.489408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.489440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.489696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.489729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.490009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.490049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.490330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.490363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.490641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.490674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.490963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.490995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.491299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.491332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.491536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.491568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.491846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.491879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.492157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.492189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.492476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.492509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.492704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.492737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.492999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.493033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.493242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.493275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.493548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.493580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.493859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.493892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.494186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.494219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.494423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.494457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.494649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.494681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.494964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.494999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.495279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.495312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.495538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.495570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 
00:30:54.882 [2024-12-09 10:41:32.495830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.495866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.496156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.496189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.496468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.496500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.496784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.882 [2024-12-09 10:41:32.496828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.882 qpair failed and we were unable to recover it. 00:30:54.882 [2024-12-09 10:41:32.497138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.497171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.497362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.497396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.497526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.497560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.497827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.497862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.498168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.498200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.498493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.498526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.498800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.498843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.499048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.499081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.499295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.499328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.499444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.499475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.499753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.499785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.500099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.500134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.500357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.500389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.500603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.500634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.500841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.500875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.501159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.501192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.501491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.501528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.501820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.501855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.502064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.502096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.502365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.502398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.502554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.502587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.502839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.502873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.503007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.503040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.503250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.503283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.503580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.503611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.503882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.503917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.504184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.504218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.504513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.504547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.504756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.504788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.505108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.505142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.505444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.505476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.505687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.505719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.505985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.506021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.506312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.506344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.506565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.506596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 00:30:54.883 [2024-12-09 10:41:32.506827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.883 [2024-12-09 10:41:32.506862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.883 qpair failed and we were unable to recover it. 
00:30:54.883 [2024-12-09 10:41:32.507143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.507176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.507401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.507433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.507586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.507620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.507922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.507955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.508176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.508209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.508412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.508446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.508659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.508691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.508919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.508954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.509265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.509299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.509497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.509529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.509724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.509755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.509978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.510013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.510265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.510296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.510474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.510505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.510710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.510743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.511016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.511049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.511344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.511376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.511695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.511730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.511955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.511990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.512242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.512275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.512477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.512510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.512712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.512745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.512937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.512970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.513162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.513195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.513395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.513426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.513628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.513661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.883 [2024-12-09 10:41:32.513917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.883 [2024-12-09 10:41:32.513951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.883 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.514140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.514172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.514294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.514325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.514585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.514619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.514919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.514953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.515135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.515167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.515469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.515502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.515799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.515849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.516051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.516084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.516369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.516404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.516684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.516716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.516920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.516954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.517173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.517207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.517405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.517437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.517701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.517733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.518030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.518065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.518341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.518373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.518592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.518623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.518828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.518863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.519126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.519161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.519368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.519399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.519682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.519722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.519934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.519968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.520247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.520279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.520500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.520533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.520663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.520694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.520895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.520927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.521185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.521217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.521514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.521545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.521827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.521861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.522088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.522121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.522396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.522428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.522715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.522747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.523029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.523063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.523349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.523382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.523599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.523631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.523922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.523957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.524101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.524134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.524407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.524439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.524727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.524759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.525041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.525074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.525257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.525289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.525407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.525441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.525715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.525747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.526013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.526048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.526349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.526382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.526592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.526625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.526859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.526894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.527119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.527153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.527408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.527441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.527707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.527740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.527998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.528034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.528331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.528364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.528582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.528614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.528884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.528918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.529221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.529254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.529451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.529483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.529611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.529645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.529853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.529888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.530163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.884 [2024-12-09 10:41:32.530195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.884 qpair failed and we were unable to recover it.
00:30:54.884 [2024-12-09 10:41:32.530484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.530517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.530799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.530848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.531144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.531176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.531460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.531494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.531698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.531731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.531927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.531961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.532189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.532222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.532476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.532510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.532731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.532763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.533052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.533087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.533339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.533372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.533556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.533589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.533863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.533899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.534184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.534217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.534484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.534516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.534825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.534860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2820997 Killed "${NVMF_APP[@]}" "$@"
00:30:54.885 [2024-12-09 10:41:32.535116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.535150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.535353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.535386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.535610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.535644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:54.885 [2024-12-09 10:41:32.535832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.535866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.536073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.536107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.536389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.536423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:54.885 [2024-12-09 10:41:32.536674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.536708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:54.885 [2024-12-09 10:41:32.536936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.536971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:54.885 [2024-12-09 10:41:32.537251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.537285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.537544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.537582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.537778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.885 [2024-12-09 10:41:32.537825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.885 qpair failed and we were unable to recover it.
00:30:54.885 [2024-12-09 10:41:32.538026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.538059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.538310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.538345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.538598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.538632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.538935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.538970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.539096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.539131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 
00:30:54.885 [2024-12-09 10:41:32.539428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.539462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.539766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.539801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.540024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.540059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.540266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.540299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.540449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.540483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 
00:30:54.885 [2024-12-09 10:41:32.540783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.540824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.540959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.540991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.541207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.541241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.541504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.541538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.541731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.541764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 
00:30:54.885 [2024-12-09 10:41:32.541996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.542031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.542287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.542321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.542593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.542626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.542844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.542879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.543132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.543166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 
00:30:54.885 [2024-12-09 10:41:32.543385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.543419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.543650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.543684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.543832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.543866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.544058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.544092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.544373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.544408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 
00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2821713 00:30:54.885 [2024-12-09 10:41:32.544608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.544644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2821713 00:30:54.885 [2024-12-09 10:41:32.544847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:54.885 [2024-12-09 10:41:32.544884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.545022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.545055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 
00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2821713 ']' 00:30:54.885 [2024-12-09 10:41:32.545318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.545355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.885 [2024-12-09 10:41:32.545555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.545591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 qpair failed and we were unable to recover it. 00:30:54.885 [2024-12-09 10:41:32.545806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.545856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.885 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.885 qpair failed and we were unable to recover it. 
00:30:54.885 [2024-12-09 10:41:32.546142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.885 [2024-12-09 10:41:32.546179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.546395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.546432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.886 [2024-12-09 10:41:32.546636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.546673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:54.886 [2024-12-09 10:41:32.546938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.546978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.547205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.547238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.547429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.547463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.547720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.547753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.547970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.548008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.548171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.548206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.548469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.548502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.548645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.548678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.548890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.548927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.549180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.549213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.549410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.549446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.549752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.549788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.550030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.550064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.550296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.550329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.550576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.550611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.550887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.550924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.551147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.551180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.551394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.551431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.551634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.551667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.551932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.551970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.552255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.552288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.552513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.552547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.552768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.552801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.553074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.553109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.553362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.553394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.553591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.553626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.553834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.553875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.554082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.554114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.554405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.554440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.554714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.554746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.554976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.555011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.555212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.555246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.555521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.555553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.555839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.555874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.556127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.556161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 00:30:54.886 [2024-12-09 10:41:32.556394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.886 [2024-12-09 10:41:32.556428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:54.886 qpair failed and we were unable to recover it. 
00:30:54.886 [2024-12-09 10:41:32.556674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.886 [2024-12-09 10:41:32.556708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420
00:30:54.886 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7f5890000b90 (addr=10.0.0.2, port=4420) through 2024-12-09 10:41:32.586349 ...]
00:30:55.163 [2024-12-09 10:41:32.586599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.163 [2024-12-09 10:41:32.586632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.163 qpair failed and we were unable to recover it. 00:30:55.163 [2024-12-09 10:41:32.586831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.163 [2024-12-09 10:41:32.586866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.163 qpair failed and we were unable to recover it. 00:30:55.163 [2024-12-09 10:41:32.587116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.163 [2024-12-09 10:41:32.587149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.163 qpair failed and we were unable to recover it. 00:30:55.163 [2024-12-09 10:41:32.587383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.163 [2024-12-09 10:41:32.587415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.163 qpair failed and we were unable to recover it. 00:30:55.163 [2024-12-09 10:41:32.587676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.163 [2024-12-09 10:41:32.587708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.163 qpair failed and we were unable to recover it. 
00:30:55.163 [2024-12-09 10:41:32.587959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.163 [2024-12-09 10:41:32.587994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.163 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.588266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.588299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.588558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.588592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.588857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.588891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.589083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.589116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.589435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.589469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.589599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.589632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.589897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.589932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.590214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.590247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.590374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.590407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.590525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.590558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.590758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.590800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.591085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.591119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.591370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.591403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.591625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.591658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.591915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.591949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.592151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.592184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.592417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.592450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.592662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.592695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.592842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.592877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.593083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.593115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.593297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.593330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.593547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.593582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.593740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.593775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.594042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.594077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.594328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.594360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.594563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.594596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.594735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.594768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.594988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.595022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.595334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.595367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.595497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.595536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.595663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.595696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.595853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.595899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.596081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.596115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.596371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.596406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.596614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.596647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.596787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.596831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.596981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.597014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.597153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.597186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.597436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.597469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.597595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.597628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.597823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.597858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.598063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.598096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.598285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.598318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.598602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.598636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.598893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.598927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.599048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.599081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.599235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.599269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.599460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.599493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.599620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.599652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.599907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.599943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.600161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.600194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.600407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.600440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.600664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.600698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.600837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.600871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.601079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.601112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.601317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.601352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.601398] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:30:55.164 [2024-12-09 10:41:32.601462] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.164 [2024-12-09 10:41:32.601555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.601589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 00:30:55.164 [2024-12-09 10:41:32.601714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.601746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
00:30:55.164 [2024-12-09 10:41:32.601941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.164 [2024-12-09 10:41:32.601974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.164 qpair failed and we were unable to recover it. 
[... the same error pair for tqpair=0x7f5890000b90 (addr=10.0.0.2, port=4420) continues through 10:41:32.608; repeats elided ...]
00:30:55.165 [2024-12-09 10:41:32.608779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.608821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.608929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.608962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.609114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.609148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.609348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.609384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.609661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.609696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 
00:30:55.165 [2024-12-09 10:41:32.609901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.609937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.610126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.610161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.610441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.610477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.610685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.610719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.610833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.610869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 
00:30:55.165 [2024-12-09 10:41:32.611175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.611210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.611420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.611455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.611666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.611700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.611909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.611947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.612142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.612176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 
00:30:55.165 [2024-12-09 10:41:32.612427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.612462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.612658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.612692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.612824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.612860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.613158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.613192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.613406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.613440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 
00:30:55.165 [2024-12-09 10:41:32.613589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.613623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.613755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.613790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.613983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.614020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.614217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.614252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 00:30:55.165 [2024-12-09 10:41:32.614405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.165 [2024-12-09 10:41:32.614439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.165 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.614688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.614722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.614978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.615015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.615288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.615323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.615546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.615581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.615772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.615821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.616118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.616153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.616282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.616317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.616507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.616541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.616735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.616774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.616909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.616945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.617140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.617174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.617312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.617349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.617616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.617649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.617860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.617896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.618093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.618128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.618257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.618290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.618548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.618584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.618709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.618752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.618955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.618990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.619181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.619215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.619404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.619438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.619613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.619648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.619781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.619834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.620026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.620061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.620267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.620302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.620515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.620549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.620741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.620777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.620928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.620965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.621239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.621274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.621520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.621556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.621697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.621738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.621919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.621955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.622096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.622130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.622254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.622288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.622561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.622595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.622729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.622768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.622977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.623013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.623206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.623241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.623434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.623469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.623649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.623693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.623842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.623877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.624122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.624156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.624336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.624371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.624563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.624599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.624806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.624851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.625075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.625110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.625323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.625359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.625551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.625585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.625767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.625802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.626141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.626174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.626309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.626342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.626632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.626666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.626847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.626885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.627157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.627192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.627317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.627351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.627472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.627516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.627759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.627794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.627994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.628028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.628273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.628307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-09 10:41:32.628446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.628480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 
00:30:55.166 [2024-12-09 10:41:32.628666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-09 10:41:32.628700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.628888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.628924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.629175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.629209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.629334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.629368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.629639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.629674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.629931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.629967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.630153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.630188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.630382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.630417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.630614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.630649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.630917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.630992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.631256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.631295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.631487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.631522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.631734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.631769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.631971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.632005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.632277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.632311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.632439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.632483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.632675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.632710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.632841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.632876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.632995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.633027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.633166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.633200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.633473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.633507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.633681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.633714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.633833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.633866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.633983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.634018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.634206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.634240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.634459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.634492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.634779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.634820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.635022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.635057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.635305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.635338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.635538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.635571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.635748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.635781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.636059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.636093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.636217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.636250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.636428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.636460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.636672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.636705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.636882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.636917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.637093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.637126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.637311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.637344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.637586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.637619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.637865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.637900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.638092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.638130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.638248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.638281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.638536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.638576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.638826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.638862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.639051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.639084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.639334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.639367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.639540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.639574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.639698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.639732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.639928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.639962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.640142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.640175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.640372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.640405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.640547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.640581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.640708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.640742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.640932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.640967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.641106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.641139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.641313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.641347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.641550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.641584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.641714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.641748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.641931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.641967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 00:30:55.167 [2024-12-09 10:41:32.642212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.167 [2024-12-09 10:41:32.642245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.167 qpair failed and we were unable to recover it. 
00:30:55.167 [2024-12-09 10:41:32.642491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.642525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.642638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.642671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.642793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.642835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.643110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.643143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.643419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.643452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.643670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.643703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.643999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.644033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.644170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.644204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.644467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.644500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.644696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.644730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.644979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.645013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.645229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.645263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.645515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.645548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.645675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.645708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.645997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.646033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.646221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.646255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.646351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.646385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.646592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.646625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.646869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.646904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.647194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.647227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.647339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.647372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.647514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.647547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.647664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.647696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.647936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.648011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.648175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.648215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.648338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.648372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.648496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.648530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.648648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.648682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.648950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.648986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.649111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.649146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.649322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.649355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.649598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.649631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.649765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.649798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.650055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.650089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.650273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.650306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.650497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.650530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.650774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.650824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.651081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.651116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.651247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.651282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.651547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.651581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.651762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.651796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.652080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.652114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.652405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.652441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.652615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.652648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.652773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.652818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.653045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.653079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.653283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.653317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.653450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.653484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.653653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.653687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.653866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.653903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.654084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.654118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.654294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.654328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.654517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.654552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.654685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.654718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.168 [2024-12-09 10:41:32.654838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.654873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 
00:30:55.168 [2024-12-09 10:41:32.655119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.168 [2024-12-09 10:41:32.655151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.168 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.655353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.655387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.655631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.655665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.655797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.655849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.655964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.655995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.656123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.656157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.656401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.656435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.656560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.656598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.656796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.656848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.656964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.656997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.657177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.657210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.657407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.657441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.657637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.657670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.657853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.657888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.658066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.658099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.658217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.658250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.658457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.658489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.658595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.658628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.658828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.658863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.659108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.659141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.659282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.659316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.659477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.659522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.659643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.659677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.659950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.659985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.660227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.660261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.660471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.660504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.660696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.660728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.661006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.661040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.661175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.661208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.661478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.661512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.661646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.661678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.661817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.661852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.662052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.662086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.662269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.662303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.662438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.662470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.662717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.662750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.662954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.662991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.663166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.663199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.663377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.663412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.663652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.663686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.663821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.663856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.664102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.664135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.664252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.664285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.664471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.664505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.664748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.664781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.665028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.665101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.665260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.665298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.665573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.665608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.665750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.665787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.666048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.666082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.666203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.666236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.666516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.666550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.666670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.666705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.666909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.666946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.667124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.667156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.667288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.667321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.667556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.667589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.667777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.667817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.667962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.667995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.668237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.668269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 
00:30:55.169 [2024-12-09 10:41:32.668475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.169 [2024-12-09 10:41:32.668508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.169 qpair failed and we were unable to recover it. 00:30:55.169 [2024-12-09 10:41:32.668774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.170 [2024-12-09 10:41:32.668806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.170 qpair failed and we were unable to recover it. 00:30:55.170 [2024-12-09 10:41:32.669092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.170 [2024-12-09 10:41:32.669126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.170 qpair failed and we were unable to recover it. 00:30:55.170 [2024-12-09 10:41:32.669368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.170 [2024-12-09 10:41:32.669401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.170 qpair failed and we were unable to recover it. 00:30:55.170 [2024-12-09 10:41:32.669535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.170 [2024-12-09 10:41:32.669569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.170 qpair failed and we were unable to recover it. 
00:30:55.170 [2024-12-09 10:41:32.669750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.170 [2024-12-09 10:41:32.669783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.170 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats continuously from 10:41:32.669750 through 10:41:32.695454, first for tqpair=0x7f5884000b90 and then for tqpair=0x7f5888000b90, always against addr=10.0.0.2, port=4420; repeats elided ...]
00:30:55.171 [2024-12-09 10:41:32.690974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:55.172 [2024-12-09 10:41:32.695420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.695454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it.
00:30:55.172 [2024-12-09 10:41:32.695642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.695675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.695884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.695924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.696114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.696148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.696262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.696296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.696535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.696568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.696685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.696719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.696933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.696968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.697088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.697122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.697314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.697348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.697534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.697567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.697683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.697716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.698023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.698059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.698236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.698268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.698389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.698422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.698611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.698645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.698769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.698803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.699010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.699043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.699279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.699313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.699511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.699545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.699717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.699752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.699990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.700032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.700164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.700198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.700373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.700407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.700671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.700706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.700841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.700877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.701064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.701099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.701283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.701317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.701505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.701540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.701789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.701836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.702032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.702066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.702185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.702219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.702421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.702455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.702630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.702665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.702908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.702943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.703080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.703114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.703290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.703325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.703434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.703467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.703574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.703608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.703751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.703785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.703988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.704022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.704198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.704231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.704439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.704479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.704712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.704745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.704923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.704957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.705144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.705178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.705443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.705477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.705593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.705626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.705864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.705899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.706163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.706197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.706381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.706414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.706598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.706632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.706831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.706878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.707071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.707106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.707296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.707330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.707469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.707503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 
00:30:55.172 [2024-12-09 10:41:32.707749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.707782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.172 [2024-12-09 10:41:32.707930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.172 [2024-12-09 10:41:32.707965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.172 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.708145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.708180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.708318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.708352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.708464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.708499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 
00:30:55.173 [2024-12-09 10:41:32.708763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.708797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.709076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.709109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.709290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.709324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.709584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.709617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.709866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.709901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 
00:30:55.173 [2024-12-09 10:41:32.710035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.710069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.710268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.710302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.710511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.710545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.710670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.710703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.710889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.710924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 
00:30:55.173 [2024-12-09 10:41:32.711037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.711072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.711270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.711305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.711409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.711442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.711631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.711666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 00:30:55.173 [2024-12-09 10:41:32.711854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.173 [2024-12-09 10:41:32.711889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.173 qpair failed and we were unable to recover it. 
00:30:55.173 [2024-12-09 10:41:32.712131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.173 [2024-12-09 10:41:32.712166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:55.173 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats continuously from 10:41:32.712352 through 10:41:32.730682 as the initiator retries 10.0.0.2:4420; tqpair is 0x7f5888000b90 until 10:41:32.719393, 0x7f5890000b90 from 10:41:32.719441 through 10:41:32.727880, then 0x7f5888000b90 again from 10:41:32.728042 ...]
00:30:55.174 [2024-12-09 10:41:32.730924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.174 [2024-12-09 10:41:32.730963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:55.174 qpair failed and we were unable to recover it.
[... connect() errno = 111 retries to tqpair=0x7f5888000b90 at 10.0.0.2:4420 continue, 10:41:32.731081 through 10:41:32.732614, interleaved with the following application NOTICE lines ...]
00:30:55.174 [2024-12-09 10:41:32.731555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:55.174 [2024-12-09 10:41:32.731589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:55.174 [2024-12-09 10:41:32.731597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:55.174 [2024-12-09 10:41:32.731604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:55.174 [2024-12-09 10:41:32.731609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:55.175 [2024-12-09 10:41:32.732854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.175 [2024-12-09 10:41:32.732889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:55.175 qpair failed and we were unable to recover it.
[... connect() errno = 111 retries to tqpair=0x7f5888000b90 at 10.0.0.2:4420 continue, 10:41:32.733068 through 10:41:32.734374, interleaved with the following reactor NOTICE lines ...]
00:30:55.175 [2024-12-09 10:41:32.733269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:55.175 [2024-12-09 10:41:32.733376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:55.175 [2024-12-09 10:41:32.733482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:55.175 [2024-12-09 10:41:32.733483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:55.175 [2024-12-09 10:41:32.734481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.734515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.734629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.734671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.734869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.734904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.735091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.735125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.735246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.735278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.735471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.735505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.735700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.735734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.735864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.735899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.736035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.736070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.736175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.736219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.736334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.736370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.736607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.736642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.736827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.736862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.736976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.737010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.737182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.737216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.737412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.737446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.737581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.737615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.737728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.737763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.737947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.737982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.738164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.738198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.738436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.738476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.738685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.738719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.738854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.738890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.739171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.739205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.739409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.739443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.739633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.739667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.739801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.739845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.739977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.740010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.740181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.740214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.740426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.740460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.740663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.740697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.740826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.740860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.740988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.741023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.741210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.741243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.741372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.741405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.741511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.741544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.741653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.741687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.741929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.741964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.742099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.742133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.742253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.742285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.742396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.742430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.742624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.742658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.742787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.742842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.743099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.743132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.743351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.743385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 
00:30:55.175 [2024-12-09 10:41:32.743629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.743662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.743791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.743833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.744091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.744125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.744373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.175 [2024-12-09 10:41:32.744408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.175 qpair failed and we were unable to recover it. 00:30:55.175 [2024-12-09 10:41:32.744697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.744731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.745000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.745038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.745306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.745341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.745549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.745583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.745814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.745849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.746138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.746173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.746374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.746410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.746597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.746632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.746898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.746934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.747131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.747165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.747341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.747375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.747650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.747691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.747863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.747897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.748136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.748170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.748430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.748465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.748702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.748737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.749005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.749042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.749244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.749279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.749523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.749558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.749860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.749896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.750147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.750182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.750460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.750495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.750771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.750806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.751078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.751113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.751376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.751410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.751681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.751716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.751955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.751992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.752174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.752208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.752496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.752533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.752723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.752756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.753021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.753057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.753316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.753351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.753590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.753625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.753889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.753927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.754118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.754151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.754326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.754361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [2024-12-09 10:41:32.754479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.754513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.754642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.754676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.754871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.754906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.755099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.755134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 00:30:55.176 [2024-12-09 10:41:32.755321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.176 [2024-12-09 10:41:32.755355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.176 qpair failed and we were unable to recover it. 
00:30:55.176 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." record for tqpair=0x7f5888000b90 (addr=10.0.0.2, port=4420) repeats through 2024-12-09 10:41:32.783682 ...]
00:30:55.178 [2024-12-09 10:41:32.783857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.783893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.784075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.784108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.784397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.784430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.784683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.784717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.784900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.784935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 
00:30:55.178 [2024-12-09 10:41:32.785131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.785164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.785359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.785392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.785640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.785675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.785917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.785953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.786194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.786227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 
00:30:55.178 [2024-12-09 10:41:32.786347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.786383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.786623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.786656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.786908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.786944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.787117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.787150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.787338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.787374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 
00:30:55.178 [2024-12-09 10:41:32.787562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.787595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.787843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.787879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.788173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.788207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.788469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.788503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.788722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.788757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 
00:30:55.178 [2024-12-09 10:41:32.788941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.788990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.789253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.789285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.789549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.789583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.789773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.789807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.790005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.790039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 
00:30:55.178 [2024-12-09 10:41:32.790230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.790263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.178 [2024-12-09 10:41:32.790523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.178 [2024-12-09 10:41:32.790557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.178 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.790797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.790860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.791099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.791132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.791419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.791453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.791677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.791711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.791901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.791936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.792196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.792229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.792356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.792388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.792590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.792624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.792886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.792921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.793188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.793221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.793459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.793492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.793707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.793740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.794002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.794036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.794256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.794289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.794475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.794508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.794768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.794801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.795022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.795055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.795244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.795277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.795456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.795489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.795667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.795700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.796017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.796106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5884000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.796363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.796420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.796699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.796734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.796918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.796955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.797213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.797247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.797437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.797471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.797714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.797747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.797876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.797911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.798107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.798140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.798325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.798359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.798540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.798574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.798754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.798787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.798988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.799022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.799165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.799199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.799446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.799479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.799676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.799709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.799994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.800028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.800198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.800232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.800353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.800384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.800554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.800587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.800830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.800865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.800981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.801014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.801231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.801264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.801475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.801508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.801624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.801658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.801846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.801880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.802057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.802090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.802419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.802495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5890000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.179 [2024-12-09 10:41:32.802794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.802841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.802962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.802995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.803200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.803234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.803472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.803505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 00:30:55.179 [2024-12-09 10:41:32.803767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.179 [2024-12-09 10:41:32.803799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.179 qpair failed and we were unable to recover it. 
00:30:55.180 [2024-12-09 10:41:32.811485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.180 [2024-12-09 10:41:32.811518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420
00:30:55.180 qpair failed and we were unable to recover it.
00:30:55.180 [2024-12-09 10:41:32.812628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.180 [2024-12-09 10:41:32.812666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420
00:30:55.180 qpair failed and we were unable to recover it.
00:30:55.181 [2024-12-09 10:41:32.829928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.829963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.830219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.830254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.830504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.830537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.830826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.830861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.831074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.831111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 
00:30:55.181 [2024-12-09 10:41:32.831338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.831371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.831569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.831602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.831842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.831877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.831993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.832026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.832216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.832250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 
00:30:55.181 [2024-12-09 10:41:32.832358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.832391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.832652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.832686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.832890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.832925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.833090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.833123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 00:30:55.181 [2024-12-09 10:41:32.833385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.181 [2024-12-09 10:41:32.833418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.181 qpair failed and we were unable to recover it. 
00:30:55.182 [2024-12-09 10:41:32.833587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.833622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.833860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.833895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.834143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.834176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.834418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.834452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebe0 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 A controller has encountered a failure and is being reset. 00:30:55.182 [2024-12-09 10:41:32.834655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.834729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 
00:30:55.182 [2024-12-09 10:41:32.834977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.835018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.835269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.835303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.835597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.835631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.835848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.835884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.836124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.836157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 
00:30:55.182 [2024-12-09 10:41:32.836425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.836459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.836746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.836780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5888000b90 with addr=10.0.0.2, port=4420 00:30:55.182 qpair failed and we were unable to recover it. 00:30:55.182 [2024-12-09 10:41:32.837041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.182 [2024-12-09 10:41:32.837102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112cb20 with addr=10.0.0.2, port=4420 00:30:55.182 [2024-12-09 10:41:32.837131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112cb20 is same with the state(6) to be set 00:30:55.182 [2024-12-09 10:41:32.837165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112cb20 (9): Bad file descriptor 00:30:55.182 [2024-12-09 10:41:32.837193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:55.182 [2024-12-09 10:41:32.837214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:55.182 [2024-12-09 10:41:32.837237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:55.182 Unable to reset the controller. 
00:30:55.182 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.182 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:55.182 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.182 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.182 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:55.440 Malloc0 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:55.440 [2024-12-09 
10:41:32.909893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:55.440 [2024-12-09 
10:41:32.938120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.440 10:41:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2821021 00:30:56.373 Controller properly reset. 00:31:01.759 Initializing NVMe Controllers 00:31:01.759 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:01.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:01.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:01.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:01.759 Initialization complete. Launching workers. 
00:31:01.759 Starting thread on core 1 00:31:01.759 Starting thread on core 2 00:31:01.759 Starting thread on core 3 00:31:01.759 Starting thread on core 0 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:01.759 00:31:01.759 real 0m10.641s 00:31:01.759 user 0m34.582s 00:31:01.759 sys 0m6.019s 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:01.759 ************************************ 00:31:01.759 END TEST nvmf_target_disconnect_tc2 00:31:01.759 ************************************ 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.759 rmmod nvme_tcp 00:31:01.759 rmmod nvme_fabrics 00:31:01.759 rmmod nvme_keyring 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2821713 ']' 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2821713 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2821713 ']' 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2821713 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821713 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821713' 00:31:01.759 killing process with pid 2821713 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2821713 00:31:01.759 10:41:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2821713 00:31:01.759 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.759 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:01.759 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:01.759 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:01.759 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:01.760 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:01.760 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:01.760 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.760 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.760 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.760 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.760 10:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.663 10:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:03.663 00:31:03.663 real 0m19.432s 00:31:03.663 user 1m1.439s 00:31:03.663 sys 0m11.179s 00:31:03.663 10:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.663 10:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:03.663 ************************************ 00:31:03.663 END TEST nvmf_target_disconnect 00:31:03.663 ************************************ 00:31:03.663 10:41:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:03.663 00:31:03.663 real 5m54.759s 00:31:03.663 user 10m53.683s 00:31:03.663 sys 1m59.745s 00:31:03.663 10:41:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.663 10:41:41 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.663 ************************************ 00:31:03.663 END TEST nvmf_host 00:31:03.663 ************************************ 00:31:03.663 10:41:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:03.663 10:41:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:03.663 10:41:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:03.663 10:41:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:03.663 10:41:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.663 10:41:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:03.663 ************************************ 00:31:03.663 START TEST nvmf_target_core_interrupt_mode 00:31:03.663 ************************************ 00:31:03.663 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:03.922 * Looking for test storage... 
00:31:03.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:03.922 10:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:03.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.922 --rc 
genhtml_branch_coverage=1 00:31:03.922 --rc genhtml_function_coverage=1 00:31:03.922 --rc genhtml_legend=1 00:31:03.922 --rc geninfo_all_blocks=1 00:31:03.922 --rc geninfo_unexecuted_blocks=1 00:31:03.922 00:31:03.922 ' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:03.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.922 --rc genhtml_branch_coverage=1 00:31:03.922 --rc genhtml_function_coverage=1 00:31:03.922 --rc genhtml_legend=1 00:31:03.922 --rc geninfo_all_blocks=1 00:31:03.922 --rc geninfo_unexecuted_blocks=1 00:31:03.922 00:31:03.922 ' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:03.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.922 --rc genhtml_branch_coverage=1 00:31:03.922 --rc genhtml_function_coverage=1 00:31:03.922 --rc genhtml_legend=1 00:31:03.922 --rc geninfo_all_blocks=1 00:31:03.922 --rc geninfo_unexecuted_blocks=1 00:31:03.922 00:31:03.922 ' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:03.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.922 --rc genhtml_branch_coverage=1 00:31:03.922 --rc genhtml_function_coverage=1 00:31:03.922 --rc genhtml_legend=1 00:31:03.922 --rc geninfo_all_blocks=1 00:31:03.922 --rc geninfo_unexecuted_blocks=1 00:31:03.922 00:31:03.922 ' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.922 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.923 
10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.923 10:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:03.923 
10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:03.923 ************************************ 00:31:03.923 START TEST nvmf_abort 00:31:03.923 ************************************ 00:31:03.923 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:03.923 * Looking for test storage... 
00:31:04.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:04.182 10:41:41 
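The trace above walks through scripts/common.sh's version comparison (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`): both versions are split on `.-:` into arrays, then compared component by component. The following is a hedged bash re-creation of that logic for illustration — the function names mirror the trace, but the body is a sketch, not SPDK's exact code (the real script routes components through a `decimal` helper first):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above (scripts/common.sh).
# Components are compared numerically, left to right; missing components
# count as 0. The operator ("<", ">", "==") decides the return status.

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == ">" ]]; return; }
        (( a < b )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]   # every component matched
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace succeeds at `scripts/common.sh@368 -- # return 0`: at the first component, `1 < 2` settles the comparison and the requested operator is `<`.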
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.182 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:04.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.183 --rc genhtml_branch_coverage=1 00:31:04.183 --rc genhtml_function_coverage=1 00:31:04.183 --rc genhtml_legend=1 00:31:04.183 --rc geninfo_all_blocks=1 00:31:04.183 --rc geninfo_unexecuted_blocks=1 00:31:04.183 00:31:04.183 ' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:04.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.183 --rc genhtml_branch_coverage=1 00:31:04.183 --rc genhtml_function_coverage=1 00:31:04.183 --rc genhtml_legend=1 00:31:04.183 --rc geninfo_all_blocks=1 00:31:04.183 --rc geninfo_unexecuted_blocks=1 00:31:04.183 00:31:04.183 ' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:04.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.183 --rc genhtml_branch_coverage=1 00:31:04.183 --rc genhtml_function_coverage=1 00:31:04.183 --rc genhtml_legend=1 00:31:04.183 --rc geninfo_all_blocks=1 00:31:04.183 --rc geninfo_unexecuted_blocks=1 00:31:04.183 00:31:04.183 ' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:04.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.183 --rc genhtml_branch_coverage=1 00:31:04.183 --rc genhtml_function_coverage=1 00:31:04.183 --rc genhtml_legend=1 00:31:04.183 --rc geninfo_all_blocks=1 00:31:04.183 --rc geninfo_unexecuted_blocks=1 00:31:04.183 00:31:04.183 ' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.183 10:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.183 10:41:41 
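Each time paths/export.sh is sourced, the trace shows the same toolchain directories (`/opt/go/1.21.1/bin`, `/opt/golangci/1.54.2/bin`, `/opt/protoc/21.7/bin`) being prepended again, so PATH accumulates duplicate entries. A hypothetical idempotent prepend — not what export.sh actually does — would keep the variable stable across repeated sourcing:

```shell
# Hypothetical guard against the duplicate prepending visible in the trace:
# only prepend a directory if it is not already a PATH component.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already present: do nothing
        *) PATH="$1:$PATH" ;;    # prepend exactly once
    esac
}

path_prepend /opt/hypothetical/bin
path_prepend /opt/hypothetical/bin   # second call is a no-op
echo "$PATH"
```

The duplicates are harmless for lookup correctness (the first match wins), which is presumably why the test scripts tolerate them.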
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.183 10:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.752 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.753 10:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:10.753 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:10.753 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.753 
10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:10.753 Found net devices under 0000:86:00.0: cvl_0_0 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:10.753 Found net devices under 0000:86:00.1: cvl_0_1 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.753 10:41:47 
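The "Found net devices under 0000:86:00.0: cvl_0_0" messages come from the sysfs lookup in nvmf/common.sh: the kernel exposes a PCI function's network interfaces under `/sys/bus/pci/devices/<addr>/net/<ifname>`, so a glob plus a `##*/` basename expansion yields the interface names. A runnable sketch of that pattern, using a mock sysfs tree in place of real hardware (paths and interface names copied from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device discovery traced above. A temporary
# directory stands in for /sys/bus/pci/devices so this runs anywhere.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # glob the interface dirs
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

With both interfaces collected, the trace then proceeds to `nvmf_tcp_init`, which moves the target-side interface (`cvl_0_0`) into its own network namespace and assigns 10.0.0.2/24 there, leaving the initiator side (`cvl_0_1`, 10.0.0.1/24) in the root namespace.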
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:31:10.753 00:31:10.753 --- 10.0.0.2 ping statistics --- 00:31:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.753 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:31:10.753 00:31:10.753 --- 10.0.0.1 ping statistics --- 00:31:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.753 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2826259 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2826259 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2826259 ']' 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:10.753 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:10.754 [2024-12-09 10:41:47.736605] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:10.754 [2024-12-09 10:41:47.737556] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:31:10.754 [2024-12-09 10:41:47.737595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.754 [2024-12-09 10:41:47.818433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:10.754 [2024-12-09 10:41:47.858528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.754 [2024-12-09 10:41:47.858563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.754 [2024-12-09 10:41:47.858570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.754 [2024-12-09 10:41:47.858577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.754 [2024-12-09 10:41:47.858582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.754 [2024-12-09 10:41:47.859925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.754 [2024-12-09 10:41:47.860031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.754 [2024-12-09 10:41:47.860032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.754 [2024-12-09 10:41:47.927391] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:10.754 [2024-12-09 10:41:47.928094] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:10.754 [2024-12-09 10:41:47.928297] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:10.754 [2024-12-09 10:41:47.928390] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.013 [2024-12-09 10:41:48.620753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:31:11.013 Malloc0 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.013 Delay0 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.013 [2024-12-09 10:41:48.712715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.013 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:11.273 [2024-12-09 10:41:48.801895] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:13.804 Initializing NVMe Controllers 00:31:13.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:13.804 controller IO queue size 128 less than required 00:31:13.804 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:13.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:13.804 Initialization complete. Launching workers. 
00:31:13.804 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37779 00:31:13.804 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37836, failed to submit 66 00:31:13.804 success 37779, unsuccessful 57, failed 0 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.804 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.804 rmmod nvme_tcp 00:31:13.804 rmmod nvme_fabrics 00:31:13.804 rmmod nvme_keyring 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.804 10:41:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2826259 ']' 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2826259 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2826259 ']' 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2826259 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826259 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826259' 00:31:13.804 killing process with pid 2826259 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2826259 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2826259 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.804 10:41:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.804 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.709 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.709 00:31:15.709 real 0m11.774s 00:31:15.709 user 0m10.603s 00:31:15.709 sys 0m5.737s 00:31:15.709 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.709 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:15.709 ************************************ 00:31:15.709 END TEST nvmf_abort 00:31:15.709 ************************************ 00:31:15.709 10:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:15.709 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:15.709 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.709 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.709 ************************************ 00:31:15.709 START TEST nvmf_ns_hotplug_stress 00:31:15.709 ************************************ 00:31:15.709 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:15.968 * Looking for test storage... 
00:31:15.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.968 10:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.968 10:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.968 --rc genhtml_branch_coverage=1 00:31:15.968 --rc genhtml_function_coverage=1 00:31:15.968 --rc genhtml_legend=1 00:31:15.968 --rc geninfo_all_blocks=1 00:31:15.968 --rc geninfo_unexecuted_blocks=1 00:31:15.968 00:31:15.968 ' 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.968 --rc genhtml_branch_coverage=1 00:31:15.968 --rc genhtml_function_coverage=1 00:31:15.968 --rc genhtml_legend=1 00:31:15.968 --rc geninfo_all_blocks=1 00:31:15.968 --rc geninfo_unexecuted_blocks=1 00:31:15.968 00:31:15.968 ' 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.968 --rc genhtml_branch_coverage=1 00:31:15.968 --rc genhtml_function_coverage=1 00:31:15.968 --rc genhtml_legend=1 00:31:15.968 --rc geninfo_all_blocks=1 00:31:15.968 --rc geninfo_unexecuted_blocks=1 00:31:15.968 00:31:15.968 ' 00:31:15.968 10:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.968 --rc genhtml_branch_coverage=1 00:31:15.968 --rc genhtml_function_coverage=1 00:31:15.968 --rc genhtml_legend=1 00:31:15.968 --rc geninfo_all_blocks=1 00:31:15.968 --rc geninfo_unexecuted_blocks=1 00:31:15.968 00:31:15.968 ' 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.968 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.969 10:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.969 
10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.969 10:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.542 
10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.542 10:41:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:22.542 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.542 10:41:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:22.542 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.542 
10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:22.542 Found net devices under 0000:86:00.0: cvl_0_0 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.542 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:22.543 Found net devices under 0000:86:00.1: cvl_0_1 00:31:22.543 
10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:31:22.543 00:31:22.543 --- 10.0.0.2 ping statistics --- 00:31:22.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.543 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:31:22.543 00:31:22.543 --- 10.0.0.1 ping statistics --- 00:31:22.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.543 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.543 10:41:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2830260 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2830260 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2830260 ']' 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.543 10:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:22.543 [2024-12-09 10:41:59.577333] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:22.543 [2024-12-09 10:41:59.578242] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:31:22.543 [2024-12-09 10:41:59.578274] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.543 [2024-12-09 10:41:59.658105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:22.543 [2024-12-09 10:41:59.698982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.543 [2024-12-09 10:41:59.699019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.543 [2024-12-09 10:41:59.699027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.544 [2024-12-09 10:41:59.699032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.544 [2024-12-09 10:41:59.699037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:22.544 [2024-12-09 10:41:59.700349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.544 [2024-12-09 10:41:59.700453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.544 [2024-12-09 10:41:59.700453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.544 [2024-12-09 10:41:59.767655] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:22.544 [2024-12-09 10:41:59.768395] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:22.544 [2024-12-09 10:41:59.768459] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:22.544 [2024-12-09 10:41:59.768612] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:31:22.802 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:23.060 [2024-12-09 10:42:00.613229] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.061 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:23.319 10:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.320 [2024-12-09 10:42:01.009702] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.320 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:23.578 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:23.837 Malloc0 00:31:23.837 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:24.095 Delay0 00:31:24.095 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.352 10:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:24.352 NULL1 00:31:24.352 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:31:24.610 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:24.610 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2830869 00:31:24.610 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:24.610 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.998 Read completed with error (sct=0, sc=11) 00:31:25.998 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:25.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:25.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:31:25.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:25.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:25.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:25.999 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:25.999 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:26.256 true 00:31:26.256 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:26.256 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.189 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.189 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:27.189 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:27.447 true 00:31:27.447 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:27.447 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:27.705 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.963 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:27.963 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:27.963 true 00:31:27.963 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:27.963 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.335 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:29.335 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:29.335 10:42:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:29.593 true 00:31:29.593 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:29.593 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.525 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.525 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:30.525 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:30.782 true 00:31:30.782 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:30.782 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.040 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.040 10:42:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:31.040 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:31.297 true 00:31:31.297 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:31.297 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:32.669 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:32.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:32.669 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:32.669 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:32.927 true 00:31:32.927 10:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:32.927 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:33.861 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.861 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:33.861 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:34.119 true 00:31:34.119 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:34.119 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.377 10:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.635 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:34.635 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:34.635 true 00:31:34.635 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:34.635 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.006 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.006 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:36.006 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:36.264 true 00:31:36.264 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:36.264 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:37.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.194 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.456 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:37.456 10:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:37.456 true 00:31:37.456 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:37.456 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.719 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.976 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:37.976 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:37.976 true 00:31:37.976 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:37.976 10:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.361 10:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.618 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:39.618 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:39.618 true 00:31:39.618 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:39.618 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:40.550 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:40.807 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:40.807 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:40.807 true 00:31:40.807 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:40.807 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.065 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.322 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:41.322 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:41.580 true 00:31:41.580 10:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:41.580 10:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.772 10:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.772 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:42.772 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:43.030 true 00:31:43.030 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:43.030 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.963 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.222 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:44.222 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:44.222 true 00:31:44.222 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:44.222 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.479 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.738 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:44.738 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:44.996 true 00:31:44.996 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:44.996 10:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.928 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.236 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:46.236 10:42:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:46.236 true 00:31:46.236 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:46.236 10:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.493 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.757 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:46.757 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:46.757 true 00:31:47.015 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:47.015 10:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:47.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:47.953 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.211 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:48.211 10:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:48.468 true 00:31:48.468 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:48.468 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.400 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.400 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:49.400 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:49.658 true 00:31:49.659 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:49.659 10:42:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.917 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.174 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:50.174 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:50.174 true 00:31:50.431 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:50.432 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:51.364 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:51.621 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:51.621 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1024 00:31:51.621 true 00:31:51.621 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:51.622 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.879 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.137 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:52.137 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:52.395 true 00:31:52.396 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:52.396 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.769 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:31:53.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.769 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:53.769 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:53.769 true 00:31:54.027 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:54.027 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.594 10:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.870 10:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:54.870 10:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:54.870 Initializing NVMe Controllers 00:31:54.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.870 Controller IO queue size 128, less than required. 00:31:54.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:54.870 Controller IO queue size 128, less than required. 
00:31:54.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:54.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:54.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:54.870 Initialization complete. Launching workers. 00:31:54.870 ======================================================== 00:31:54.870 Latency(us) 00:31:54.870 Device Information : IOPS MiB/s Average min max 00:31:54.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2061.24 1.01 43004.54 2040.63 1018007.14 00:31:54.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18435.65 9.00 6942.82 1672.28 333148.72 00:31:54.870 ======================================================== 00:31:54.870 Total : 20496.89 10.01 10569.31 1672.28 1018007.14 00:31:54.870 00:31:55.127 true 00:31:55.127 10:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2830869 00:31:55.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2830869) - No such process 00:31:55.127 10:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2830869 00:31:55.127 10:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.385 10:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:55.385 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 
00:31:55.385 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:55.385 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:55.385 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.385 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:55.659 null0 00:31:55.659 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.659 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.659 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:55.990 null1 00:31:55.991 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.991 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.991 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:55.991 null2 00:31:55.991 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:55.991 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:55.991 10:42:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:56.255 null3 00:31:56.255 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:56.255 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:56.255 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:56.512 null4 00:31:56.512 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:56.512 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:56.512 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:56.512 null5 00:31:56.512 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:56.512 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:56.512 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:56.769 null6 00:31:56.769 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:56.769 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:56.769 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:57.026 null7 00:31:57.026 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:57.026 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:57.026 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2836478 2836480 2836483 2836486 2836490 2836492 2836495 2836497 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.027 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.285 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.541 10:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:57.541 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:57.798 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:58.056 10:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:58.056 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:58.056 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:58.056 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.056 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:58.056 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:58.056 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:58.056 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:58.313 10:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.313 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.314 10:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.314 10:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:58.314 10:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:58.314 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:58.314 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:58.314 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:58.314 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:58.314 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:58.314 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.572 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:58.572 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:58.572 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:58.830 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:58.830 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:58.830 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:58.830 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:58.830 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.830 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:58.830 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:58.830 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.088 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:59.089 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.089 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.089 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:59.089 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.089 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.089 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:59.089 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:59.346 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.346 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.347 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.347 10:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:59.347 10:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.347 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:59.605 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.864 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.123 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.382 10:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:00.382 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:00.382 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:00.382 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:00.382 10:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:00.382 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:00.382 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.382 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:00.640 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:00.640 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.641 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.899 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 
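The `@16`-`@18` lines above are one pass of the stress loop in `target/ns_hotplug_stress.sh`: attach namespaces, tear them down, repeat ten times. A hedged, runnable sketch of that pattern — `rpc_py` is a stub here, standing in for `scripts/rpc.py`, which needs a live `nvmf_tgt`:

```shell
#!/usr/bin/env bash
# Sketch of the hot-plug loop traced above (rpc_py is a stand-in stub).
rpc_py() { echo "rpc: $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do
    # Attach null0..null7 as nsid 1..8 in the background, then remove them.
    # Backgrounding is why the nsids land out of order (4, 2, 6, ...) in the log.
    for n in {1..8}; do
        rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    for n in {1..8}; do
        rpc_py nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
    (( ++i ))
done
echo "completed $i iterations"
```

The interleaved ordering of `nvmf_subsystem_add_ns` calls in the trace is exactly the effect of those backgrounded RPCs racing each other before the `wait`.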
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.157 rmmod nvme_tcp 00:32:01.157 rmmod nvme_fabrics 00:32:01.157 rmmod nvme_keyring 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2830260 ']' 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2830260 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2830260 ']' 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2830260 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.157 10:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830260 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:01.157 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830260' 00:32:01.157 killing process with pid 2830260 00:32:01.158 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2830260 00:32:01.158 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2830260 00:32:01.416 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:01.416 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:01.416 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:01.416 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:01.416 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:01.416 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:01.416 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:01.416 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:01.416 
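The `@954`-`@978` lines above are `killprocess` reaping the nvmf target (pid 2830260): verify the pid is alive, check via `ps` that it is not a bare `sudo` wrapper, then kill and `wait`. A simplified re-creation of that flow (hedged — the real helper in `common/autotest_common.sh` has more branches than shown):

```shell
# Simplified sketch of killprocess() as traced above: bail on an empty or
# dead pid, refuse to kill a sudo wrapper, then kill and reap the process.
killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 1       # still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        return 1                                  # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || :                              # reap; ignore the TERM status
}

sleep 60 &
pid=$!
killprocess "$pid"
```

The `wait` at the end is what the trace's `@978 wait 2830260` corresponds to: it guarantees the process is reaped before the script moves on to `nvmf_tcp_fini`.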
10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:01.416 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.416 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.416 10:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:03.954 00:32:03.954 real 0m47.668s 00:32:03.954 user 2m55.946s 00:32:03.954 sys 0m19.862s 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:03.954 ************************************ 00:32:03.954 END TEST nvmf_ns_hotplug_stress 00:32:03.954 ************************************ 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:03.954 ************************************ 00:32:03.954 START TEST nvmf_delete_subsystem 00:32:03.954 ************************************ 00:32:03.954 10:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:03.954 * Looking for test storage... 00:32:03.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:03.954 10:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:03.954 10:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.954 --rc genhtml_branch_coverage=1 00:32:03.954 --rc genhtml_function_coverage=1 00:32:03.954 --rc genhtml_legend=1 00:32:03.954 --rc geninfo_all_blocks=1 00:32:03.954 --rc geninfo_unexecuted_blocks=1 00:32:03.954 00:32:03.954 ' 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.954 --rc genhtml_branch_coverage=1 00:32:03.954 --rc genhtml_function_coverage=1 00:32:03.954 --rc genhtml_legend=1 00:32:03.954 --rc geninfo_all_blocks=1 00:32:03.954 --rc geninfo_unexecuted_blocks=1 00:32:03.954 00:32:03.954 ' 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.954 --rc genhtml_branch_coverage=1 00:32:03.954 --rc 
genhtml_function_coverage=1 00:32:03.954 --rc genhtml_legend=1 00:32:03.954 --rc geninfo_all_blocks=1 00:32:03.954 --rc geninfo_unexecuted_blocks=1 00:32:03.954 00:32:03.954 ' 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:03.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.954 --rc genhtml_branch_coverage=1 00:32:03.954 --rc genhtml_function_coverage=1 00:32:03.954 --rc genhtml_legend=1 00:32:03.954 --rc geninfo_all_blocks=1 00:32:03.954 --rc geninfo_unexecuted_blocks=1 00:32:03.954 00:32:03.954 ' 00:32:03.954 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
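The `lt 1.15 2` / `cmp_versions` lines above are `scripts/common.sh` checking the `lcov --version` output against 2: both versions are split on `.`, `-` and `:` (`IFS=.-:` plus `read -ra`) and compared field by field. A condensed, hedged re-creation of that comparison — the real helper also validates fields through `decimal` and supports other operators:

```shell
# Condensed sketch of lt()/cmp_versions from scripts/common.sh: split both
# versions on '.', '-' or ':' and compare numerically, field by field.
# Assumes purely numeric fields; the real helper validates them first.
lt() {
    local -a ver1 ver2
    local v max
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # versions are equal, so not strictly less-than
}
```

With this, `lt 1.15 2` succeeds (1 < 2 decides it on the first field), which is why the trace then selects the `--rc lcov_branch_coverage=1 ...` option set for the older lcov.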
NVMF_TRANSPORT_OPTS= 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:03.955 10:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:03.955 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:09.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:09.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:09.223 Found net devices under 0000:86:00.0: cvl_0_0 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.223 10:42:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:09.223 Found net devices under 0000:86:00.1: cvl_0_1 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.223 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.224 10:42:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.224 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.481 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.481 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:32:09.481 00:32:09.481 --- 10.0.0.2 ping statistics --- 00:32:09.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.481 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:32:09.481 00:32:09.481 --- 10.0.0.1 ping statistics --- 00:32:09.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.481 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.481 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2840742 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2840742 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2840742 ']' 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.738 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:09.738 [2024-12-09 10:42:47.287530] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:09.738 [2024-12-09 10:42:47.288572] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:32:09.738 [2024-12-09 10:42:47.288609] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.738 [2024-12-09 10:42:47.367707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:09.738 [2024-12-09 10:42:47.409684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.738 [2024-12-09 10:42:47.409721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.738 [2024-12-09 10:42:47.409728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.738 [2024-12-09 10:42:47.409734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.738 [2024-12-09 10:42:47.409739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.738 [2024-12-09 10:42:47.410942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.738 [2024-12-09 10:42:47.410944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.995 [2024-12-09 10:42:47.479587] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:09.995 [2024-12-09 10:42:47.480137] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:09.995 [2024-12-09 10:42:47.480315] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:09.995 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:09.996 [2024-12-09 10:42:47.547716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:09.996 [2024-12-09 10:42:47.576047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:09.996 NULL1 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:32:09.996 Delay0 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2840900 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:09.996 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:09.996 [2024-12-09 10:42:47.687124] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:32:12.522 10:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:12.522 10:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.522 10:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 starting I/O failed: -6 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 Write completed with error (sct=0, sc=8) 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 Write completed with error (sct=0, sc=8) 00:32:12.522 starting I/O failed: -6 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 Read completed with error (sct=0, sc=8) 00:32:12.522 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 
00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 [2024-12-09 10:42:49.832374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b80000c40 is same with the state(6) to be set 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error 
(sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed 
with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: 
-6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 starting I/O failed: -6 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 [2024-12-09 10:42:49.832937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d74a0 is same with the state(6) to be set 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read 
completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Write completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, sc=8) 00:32:12.523 Read completed with error (sct=0, 
sc=8) 00:32:12.524 Read completed with error (sct=0, sc=8) 00:32:12.524 Write completed with error (sct=0, sc=8) 00:32:12.524 Write completed with error (sct=0, sc=8) 00:32:13.087 [2024-12-09 10:42:50.782017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d89b0 is same with the state(6) to be set 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 [2024-12-09 10:42:50.830104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b8000d680 is same with the state(6) to be set 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, 
sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 [2024-12-09 10:42:50.833388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3b8000d020 is same with the state(6) to be set 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 
00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 [2024-12-09 10:42:50.834474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d72c0 is same with the state(6) to be set 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Write completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 Read completed with error (sct=0, sc=8) 00:32:13.348 
[2024-12-09 10:42:50.835160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d7860 is same with the state(6) to be set 00:32:13.348 Initializing NVMe Controllers 00:32:13.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:13.348 Controller IO queue size 128, less than required. 00:32:13.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:13.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:13.348 Initialization complete. Launching workers. 00:32:13.348 ======================================================== 00:32:13.348 Latency(us) 00:32:13.348 Device Information : IOPS MiB/s Average min max 00:32:13.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.19 0.08 918272.50 273.31 2000581.20 00:32:13.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.15 0.08 896391.97 346.26 1012821.62 00:32:13.348 ======================================================== 00:32:13.348 Total : 334.34 0.16 907202.38 273.31 2000581.20 00:32:13.348 00:32:13.348 [2024-12-09 10:42:50.835793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d89b0 (9): Bad file descriptor 00:32:13.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:32:13.348 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.348 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:32:13.348 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2840900 00:32:13.349 10:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:32:13.620 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:32:13.620 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2840900 00:32:13.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2840900) - No such process 00:32:13.620 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2840900 00:32:13.620 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2840900 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2840900 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:13.878 10:42:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:13.878 [2024-12-09 10:42:51.368050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.878 10:42:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2841453 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:13.878 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:13.878 [2024-12-09 10:42:51.451154] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:32:14.442 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:14.442 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:14.442 10:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:14.700 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:14.700 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:14.700 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:15.264 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:15.264 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:15.264 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:15.827 10:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:15.828 10:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:15.828 10:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:16.391 10:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:16.391 10:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:16.391 10:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:16.954 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:16.954 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:16.954 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:16.954 Initializing NVMe Controllers 00:32:16.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:16.954 Controller IO queue size 128, less than required. 00:32:16.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:16.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:16.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:16.954 Initialization complete. Launching workers. 
00:32:16.954 ======================================================== 00:32:16.954 Latency(us) 00:32:16.954 Device Information : IOPS MiB/s Average min max 00:32:16.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002026.89 1000162.48 1040858.08 00:32:16.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004295.70 1000184.87 1041339.20 00:32:16.954 ======================================================== 00:32:16.954 Total : 256.00 0.12 1003161.29 1000162.48 1041339.20 00:32:16.954 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2841453 00:32:17.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2841453) - No such process 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2841453 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:32:17.211 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.211 rmmod nvme_tcp 00:32:17.469 rmmod nvme_fabrics 00:32:17.469 rmmod nvme_keyring 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2840742 ']' 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2840742 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2840742 ']' 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2840742 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.469 10:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840742 00:32:17.469 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:17.469 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:17.469 10:42:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840742' 00:32:17.469 killing process with pid 2840742 00:32:17.469 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2840742 00:32:17.469 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2840742 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.727 10:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.727 10:42:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.629 00:32:19.629 real 0m16.128s 00:32:19.629 user 0m26.150s 00:32:19.629 sys 0m6.098s 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:19.629 ************************************ 00:32:19.629 END TEST nvmf_delete_subsystem 00:32:19.629 ************************************ 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:19.629 ************************************ 00:32:19.629 START TEST nvmf_host_management 00:32:19.629 ************************************ 00:32:19.629 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:19.889 * Looking for test storage... 
00:32:19.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.889 10:42:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:19.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.889 --rc genhtml_branch_coverage=1 00:32:19.889 --rc genhtml_function_coverage=1 00:32:19.889 --rc genhtml_legend=1 00:32:19.889 --rc geninfo_all_blocks=1 00:32:19.889 --rc geninfo_unexecuted_blocks=1 00:32:19.889 00:32:19.889 ' 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:19.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.889 --rc genhtml_branch_coverage=1 00:32:19.889 --rc genhtml_function_coverage=1 00:32:19.889 --rc genhtml_legend=1 00:32:19.889 --rc geninfo_all_blocks=1 00:32:19.889 --rc geninfo_unexecuted_blocks=1 00:32:19.889 00:32:19.889 ' 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:19.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.889 --rc genhtml_branch_coverage=1 00:32:19.889 --rc genhtml_function_coverage=1 00:32:19.889 --rc genhtml_legend=1 00:32:19.889 --rc geninfo_all_blocks=1 00:32:19.889 --rc geninfo_unexecuted_blocks=1 00:32:19.889 00:32:19.889 ' 00:32:19.889 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:19.889 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.889 --rc genhtml_branch_coverage=1 00:32:19.889 --rc genhtml_function_coverage=1 00:32:19.889 --rc genhtml_legend=1 00:32:19.889 --rc geninfo_all_blocks=1 00:32:19.889 --rc geninfo_unexecuted_blocks=1 00:32:19.889 00:32:19.889 ' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.890 10:42:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.890 
10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.890 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.452 
10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.452 10:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:26.452 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.452 10:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:26.452 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:26.452 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.453 10:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:26.453 Found net devices under 0000:86:00.0: cvl_0_0 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:26.453 Found net devices under 0000:86:00.1: cvl_0_1 00:32:26.453 10:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:32:26.453 00:32:26.453 --- 10.0.0.2 ping statistics --- 00:32:26.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.453 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:26.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:32:26.453 00:32:26.453 --- 10.0.0.1 ping statistics --- 00:32:26.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.453 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2845477 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2845477 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2845477 ']' 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.453 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.453 [2024-12-09 10:43:03.525916] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:26.453 [2024-12-09 10:43:03.526849] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:32:26.453 [2024-12-09 10:43:03.526885] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.453 [2024-12-09 10:43:03.604948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:26.453 [2024-12-09 10:43:03.648857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.453 [2024-12-09 10:43:03.648893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.453 [2024-12-09 10:43:03.648900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.454 [2024-12-09 10:43:03.648906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.454 [2024-12-09 10:43:03.648912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:26.454 [2024-12-09 10:43:03.650471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:26.454 [2024-12-09 10:43:03.650577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:26.454 [2024-12-09 10:43:03.650697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.454 [2024-12-09 10:43:03.650699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:26.454 [2024-12-09 10:43:03.719760] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:26.454 [2024-12-09 10:43:03.720097] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:26.454 [2024-12-09 10:43:03.720565] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:26.454 [2024-12-09 10:43:03.720686] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:26.454 [2024-12-09 10:43:03.720761] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 [2024-12-09 10:43:03.795334] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 10:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 Malloc0 00:32:26.454 [2024-12-09 10:43:03.883625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2845701 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2845701 /var/tmp/bdevperf.sock 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2845701 ']' 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:26.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:26.454 { 00:32:26.454 "params": { 00:32:26.454 "name": "Nvme$subsystem", 00:32:26.454 "trtype": "$TEST_TRANSPORT", 00:32:26.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.454 "adrfam": "ipv4", 00:32:26.454 "trsvcid": "$NVMF_PORT", 00:32:26.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.454 "hdgst": ${hdgst:-false}, 00:32:26.454 "ddgst": ${ddgst:-false} 00:32:26.454 }, 00:32:26.454 "method": "bdev_nvme_attach_controller" 00:32:26.454 } 00:32:26.454 EOF 00:32:26.454 )") 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:26.454 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:26.454 "params": { 00:32:26.454 "name": "Nvme0", 00:32:26.454 "trtype": "tcp", 00:32:26.454 "traddr": "10.0.0.2", 00:32:26.454 "adrfam": "ipv4", 00:32:26.454 "trsvcid": "4420", 00:32:26.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:26.454 "hdgst": false, 00:32:26.454 "ddgst": false 00:32:26.454 }, 00:32:26.454 "method": "bdev_nvme_attach_controller" 00:32:26.454 }' 00:32:26.454 [2024-12-09 10:43:03.981094] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:32:26.454 [2024-12-09 10:43:03.981139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845701 ] 00:32:26.454 [2024-12-09 10:43:04.057046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.454 [2024-12-09 10:43:04.098098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.713 Running I/O for 10 seconds... 
00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:27.282 10:43:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:27.282 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.283 
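The `break` above comes from `host_management.sh`'s `waitforio` helper: it polls `bdev_get_iostat` over the bdevperf RPC socket up to 10 times and succeeds once the bdev reports at least 100 completed reads (1091 here). A runnable sketch of that loop, with `rpc_cmd` stubbed out so it stands alone; the real helper extracts the count with `jq -r '.bdevs[0].num_read_ops'`:

```shell
# Stub standing in for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
#   | jq -r '.bdevs[0].num_read_ops'
# The fixed 1091 mirrors the read_io_count observed in the log above.
rpc_cmd() { echo 1091; }

waitforio_sketch() {
    local bdev=$1
    local ret=1 i read_io_count
    # Count down from 10 attempts, as the (( i = 10 )) / (( i != 0 )) xtrace shows.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd bdev_get_iostat -b "$bdev")
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1
    done
    return $ret
}
```

With I/O flowing, the first poll already satisfies the threshold, so the test proceeds immediately to the `nvmf_subsystem_remove_host` step that forces the failover below.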
[2024-12-09 10:43:04.895125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895235] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.895247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4120 is same with the state(6) to be set 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.283 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:27.283 [2024-12-09 10:43:04.904190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.283 [2024-12-09 10:43:04.904220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.283 [2024-12-09 10:43:04.904237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.283 [2024-12-09 10:43:04.904252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.283 [2024-12-09 10:43:04.904265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1224120 is same with the state(6) to be set 00:32:27.283 [2024-12-09 10:43:04.904306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:27.283 [2024-12-09 10:43:04.904456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.283 [2024-12-09 10:43:04.904637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.283 [2024-12-09 10:43:04.904645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 
[2024-12-09 10:43:04.904793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.904991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.904999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 
10:43:04.905135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.284 [2024-12-09 10:43:04.905235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.284 [2024-12-09 10:43:04.905242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.285 [2024-12-09 10:43:04.905249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.285 [2024-12-09 10:43:04.905257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.285 [2024-12-09 10:43:04.905265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.285 [2024-12-09 10:43:04.905272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.285 [2024-12-09 10:43:04.906200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:27.285 task offset: 24576 on job bdev=Nvme0n1 fails 00:32:27.285 00:32:27.285 Latency(us) 00:32:27.285 [2024-12-09T09:43:05.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:27.285 Job: Nvme0n1 ended in about 0.61 seconds with error 00:32:27.285 Verification LBA range: start 0x0 length 0x400 00:32:27.285 Nvme0n1 : 0.61 1989.56 124.35 104.71 0.00 29941.47 1607.19 
26713.72 00:32:27.285 [2024-12-09T09:43:05.009Z] =================================================================================================================== 00:32:27.285 [2024-12-09T09:43:05.009Z] Total : 1989.56 124.35 104.71 0.00 29941.47 1607.19 26713.72 00:32:27.285 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.285 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:27.285 [2024-12-09 10:43:04.908547] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:27.285 [2024-12-09 10:43:04.908568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1224120 (9): Bad file descriptor 00:32:27.285 [2024-12-09 10:43:04.951869] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2845701 00:32:28.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2845701) - No such process 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 
00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:28.220 { 00:32:28.220 "params": { 00:32:28.220 "name": "Nvme$subsystem", 00:32:28.220 "trtype": "$TEST_TRANSPORT", 00:32:28.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.220 "adrfam": "ipv4", 00:32:28.220 "trsvcid": "$NVMF_PORT", 00:32:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.220 "hdgst": ${hdgst:-false}, 00:32:28.220 "ddgst": ${ddgst:-false} 00:32:28.220 }, 00:32:28.220 "method": "bdev_nvme_attach_controller" 00:32:28.220 } 00:32:28.220 EOF 00:32:28.220 )") 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:28.220 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:28.220 "params": { 00:32:28.220 "name": "Nvme0", 00:32:28.220 "trtype": "tcp", 00:32:28.220 "traddr": "10.0.0.2", 00:32:28.220 "adrfam": "ipv4", 00:32:28.220 "trsvcid": "4420", 00:32:28.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:28.220 "hdgst": false, 00:32:28.220 "ddgst": false 00:32:28.220 }, 00:32:28.220 "method": "bdev_nvme_attach_controller" 00:32:28.220 }' 00:32:28.478 [2024-12-09 10:43:05.964188] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:32:28.478 [2024-12-09 10:43:05.964240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845954 ] 00:32:28.478 [2024-12-09 10:43:06.037930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.478 [2024-12-09 10:43:06.076639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.736 Running I/O for 1 seconds... 
00:32:29.670 2048.00 IOPS, 128.00 MiB/s 00:32:29.670 Latency(us) 00:32:29.670 [2024-12-09T09:43:07.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.670 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:29.670 Verification LBA range: start 0x0 length 0x400 00:32:29.670 Nvme0n1 : 1.02 2078.70 129.92 0.00 0.00 30307.63 5960.66 26588.89 00:32:29.670 [2024-12-09T09:43:07.394Z] =================================================================================================================== 00:32:29.670 [2024-12-09T09:43:07.394Z] Total : 2078.70 129.92 0.00 0.00 30307.63 5960.66 26588.89 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:29.929 
10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.929 rmmod nvme_tcp 00:32:29.929 rmmod nvme_fabrics 00:32:29.929 rmmod nvme_keyring 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2845477 ']' 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2845477 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2845477 ']' 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2845477 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2845477 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:29.929 10:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2845477' 00:32:29.929 killing process with pid 2845477 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2845477 00:32:29.929 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2845477 00:32:30.188 [2024-12-09 10:43:07.738266] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.188 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.189 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.189 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.189 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:32.725 00:32:32.725 real 0m12.494s 00:32:32.725 user 0m18.744s 00:32:32.725 sys 0m6.341s 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.725 ************************************ 00:32:32.725 END TEST nvmf_host_management 00:32:32.725 ************************************ 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:32.725 ************************************ 00:32:32.725 START TEST nvmf_lvol 00:32:32.725 ************************************ 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:32.725 * Looking for test storage... 
00:32:32.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:32:32.725 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:32.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.725 --rc genhtml_branch_coverage=1 00:32:32.725 --rc genhtml_function_coverage=1 00:32:32.725 --rc genhtml_legend=1 00:32:32.725 --rc geninfo_all_blocks=1 00:32:32.725 --rc geninfo_unexecuted_blocks=1 00:32:32.725 00:32:32.725 ' 00:32:32.725 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:32.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.726 --rc genhtml_branch_coverage=1 00:32:32.726 --rc genhtml_function_coverage=1 00:32:32.726 --rc genhtml_legend=1 00:32:32.726 --rc geninfo_all_blocks=1 00:32:32.726 --rc geninfo_unexecuted_blocks=1 00:32:32.726 00:32:32.726 ' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:32.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.726 --rc genhtml_branch_coverage=1 00:32:32.726 --rc genhtml_function_coverage=1 00:32:32.726 --rc genhtml_legend=1 00:32:32.726 --rc geninfo_all_blocks=1 00:32:32.726 --rc geninfo_unexecuted_blocks=1 00:32:32.726 00:32:32.726 ' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:32.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.726 --rc genhtml_branch_coverage=1 00:32:32.726 --rc genhtml_function_coverage=1 00:32:32.726 --rc genhtml_legend=1 00:32:32.726 --rc geninfo_all_blocks=1 00:32:32.726 --rc geninfo_unexecuted_blocks=1 00:32:32.726 00:32:32.726 ' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:32.726 
10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.726 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:38.035 10:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:38.035 10:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:38.035 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:38.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.035 10:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:38.035 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:38.036 Found net devices under 0000:86:00.0: cvl_0_0 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.036 10:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:38.036 Found net devices under 0000:86:00.1: cvl_0_1 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.036 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:38.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:32:38.295 00:32:38.295 --- 10.0.0.2 ping statistics --- 00:32:38.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.295 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:32:38.295 00:32:38.295 --- 10.0.0.1 ping statistics --- 00:32:38.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.295 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2849706 
00:32:38.295 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2849706 00:32:38.295 10:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:38.295 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2849706 ']' 00:32:38.295 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.295 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.295 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.295 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.295 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:38.553 [2024-12-09 10:43:16.049801] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:38.553 [2024-12-09 10:43:16.050723] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:32:38.553 [2024-12-09 10:43:16.050758] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.553 [2024-12-09 10:43:16.130075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:38.554 [2024-12-09 10:43:16.169925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.554 [2024-12-09 10:43:16.169961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.554 [2024-12-09 10:43:16.169968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.554 [2024-12-09 10:43:16.169973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.554 [2024-12-09 10:43:16.169978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.554 [2024-12-09 10:43:16.171236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.554 [2024-12-09 10:43:16.171344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.554 [2024-12-09 10:43:16.171345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.554 [2024-12-09 10:43:16.238515] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:38.554 [2024-12-09 10:43:16.239187] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:38.554 [2024-12-09 10:43:16.239378] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:38.554 [2024-12-09 10:43:16.239497] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:38.554 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.554 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:38.554 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:38.554 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:38.554 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:38.812 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.812 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:38.812 [2024-12-09 10:43:16.484109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.812 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:39.071 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:39.071 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:39.329 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:39.329 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:39.588 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:39.847 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=32c93ecb-5484-481a-832c-14d3d6e80d84 00:32:39.848 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32c93ecb-5484-481a-832c-14d3d6e80d84 lvol 20 00:32:39.848 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=51584d1f-92f0-41fe-9621-cdd2b87f23f3 00:32:39.848 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:40.107 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 51584d1f-92f0-41fe-9621-cdd2b87f23f3 00:32:40.365 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:40.624 [2024-12-09 10:43:18.104001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.624 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:40.624 
10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2850140 00:32:40.624 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:40.624 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:41.998 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 51584d1f-92f0-41fe-9621-cdd2b87f23f3 MY_SNAPSHOT 00:32:41.998 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5291eee5-252c-47fe-8dac-9aecc7cfabc4 00:32:41.998 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 51584d1f-92f0-41fe-9621-cdd2b87f23f3 30 00:32:42.255 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5291eee5-252c-47fe-8dac-9aecc7cfabc4 MY_CLONE 00:32:42.513 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=40488031-3610-43dd-8a0a-26cee8f2468e 00:32:42.513 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 40488031-3610-43dd-8a0a-26cee8f2468e 00:34:11.802 Resuming build at Mon Dec 09 09:44:49 UTC 2024 after Jenkins restart 00:34:15.808 Waiting for reconnection of WFP6 before proceeding with build 00:34:16.100 Timeout set to expire in 30 min 00:34:16.124 Ready to run at Mon Dec 09 09:44:53 UTC 2024 00:34:16.453 
10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2850140 00:34:16.454 Initializing NVMe Controllers 00:34:16.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:16.454 Controller IO queue size 128, less than required. 00:34:16.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:16.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:16.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:16.455 Initialization complete. Launching workers. 00:34:16.456 ======================================================== 00:34:16.456 Latency(us) 00:34:16.456 Device Information : IOPS MiB/s Average min max 00:34:16.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12288.60 48.00 10420.01 1529.75 44934.38 00:34:16.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12439.00 48.59 10289.03 445.88 81570.52 00:34:16.457 ======================================================== 00:34:16.458 Total : 24727.60 96.59 10354.12 445.88 81570.52 00:34:16.458 00:34:16.458 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.459 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 51584d1f-92f0-41fe-9621-cdd2b87f23f3 00:34:16.460 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32c93ecb-5484-481a-832c-14d3d6e80d84 00:34:16.460 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@60 -- # rm -f 00:34:16.460 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:16.460 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:34:16.461 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.461 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:16.461 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.462 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:16.462 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.462 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:16.463 rmmod nvme_tcp 00:34:16.463 rmmod nvme_fabrics 00:34:16.463 rmmod nvme_keyring 00:34:16.463 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:16.464 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:16.464 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:16.464 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2849706 ']' 00:34:16.465 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2849706 00:34:16.465 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2849706 ']' 00:34:16.466 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2849706 00:34:16.466 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@959 -- # uname 00:34:16.467 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.467 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2849706 00:34:16.468 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:16.468 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.469 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2849706' 00:34:16.469 killing process with pid 2849706 00:34:16.469 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2849706 00:34:16.470 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2849706 00:34:16.470 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:16.471 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:16.471 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:16.471 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:16.472 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:16.472 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:16.472 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:16.473 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:34:16.473 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:16.474 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.474 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.474 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.475 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.475 00:34:16.475 real 0m21.809s 00:34:16.475 user 0m55.783s 00:34:16.475 sys 0m9.763s 00:34:16.475 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.476 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:16.476 ************************************ 00:34:16.476 END TEST nvmf_lvol 00:34:16.476 ************************************ 00:34:16.477 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:16.477 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:16.477 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.478 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:16.478 ************************************ 00:34:16.478 START TEST nvmf_lvs_grow 00:34:16.478 ************************************ 00:34:16.478 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:16.479 * Looking for test storage... 00:34:16.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.479 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:16.480 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:34:16.480 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:16.481 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:16.481 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.482 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.482 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.482 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.483 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.483 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.484 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.484 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.485 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.485 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 
00:34:16.485 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.486 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:16.486 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:16.487 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.487 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:16.488 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:16.488 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:16.489 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.489 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:16.489 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.490 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:16.490 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:16.491 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.491 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:16.491 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.492 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.492 10:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.493 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:16.494 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.494 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:16.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.494 --rc genhtml_branch_coverage=1 00:34:16.495 --rc genhtml_function_coverage=1 00:34:16.495 --rc genhtml_legend=1 00:34:16.495 --rc geninfo_all_blocks=1 00:34:16.495 --rc geninfo_unexecuted_blocks=1 00:34:16.495 00:34:16.495 ' 00:34:16.496 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.496 --rc genhtml_branch_coverage=1 00:34:16.496 --rc genhtml_function_coverage=1 00:34:16.496 --rc genhtml_legend=1 00:34:16.496 --rc geninfo_all_blocks=1 00:34:16.497 --rc geninfo_unexecuted_blocks=1 00:34:16.497 00:34:16.497 ' 00:34:16.497 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:16.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.498 --rc genhtml_branch_coverage=1 00:34:16.498 --rc genhtml_function_coverage=1 00:34:16.498 --rc genhtml_legend=1 00:34:16.498 --rc geninfo_all_blocks=1 00:34:16.498 --rc geninfo_unexecuted_blocks=1 00:34:16.498 00:34:16.498 ' 00:34:16.499 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:16.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.499 --rc genhtml_branch_coverage=1 00:34:16.499 --rc 
genhtml_function_coverage=1 00:34:16.499 --rc genhtml_legend=1 00:34:16.499 --rc geninfo_all_blocks=1 00:34:16.500 --rc geninfo_unexecuted_blocks=1 00:34:16.500 00:34:16.500 ' 00:34:16.500 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.501 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:16.501 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.502 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.502 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.503 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.503 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.504 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.504 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.505 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.505 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.506 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.506 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:16.507 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:16.507 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.508 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.508 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.510 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.511 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.511 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.512 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.512 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.513 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.516 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.519 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.521 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.522 10:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:16.524 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.525 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:16.525 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.526 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.526 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.527 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.527 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.528 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:16.528 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:16.529 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.529 10:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.529 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.530 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.531 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:16.531 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:16.531 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:16.532 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.532 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:16.533 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:16.533 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:16.533 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.534 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.534 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.535 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:16.535 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:16.536 
10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.536 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:16.536 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.537 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.537 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.538 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.538 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.538 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.539 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.539 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.540 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:16.540 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:16.541 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.541 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:16.542 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.542 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:16.543 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@322 -- # local -ga mlx 00:34:16.543 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.544 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.545 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.545 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.546 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.547 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.547 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.548 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.549 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.549 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.550 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.551 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.551 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.552 10:43:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.552 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:16.553 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.553 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.554 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.554 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.555 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:16.555 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:16.556 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.556 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.557 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.557 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.558 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.558 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.559 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:16.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:16.560 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice 
== unknown ]] 00:34:16.560 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.561 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.561 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.562 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.562 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.563 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.563 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.564 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.564 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.565 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.565 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.566 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.566 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.567 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.568 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:16.568 Found net 
devices under 0000:86:00.0: cvl_0_0 00:34:16.568 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.569 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.570 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.570 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.571 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.571 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.572 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.572 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.573 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:16.573 Found net devices under 0000:86:00.1: cvl_0_1 00:34:16.574 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.574 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:16.574 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:16.575 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:16.575 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:16.576 10:43:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:16.576 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.577 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.577 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.578 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.578 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.579 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.579 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.580 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.580 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.581 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.582 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.582 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.583 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.583 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.584 
10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.584 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.585 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.586 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.586 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.587 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.588 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.589 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.589 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:34:16.590 00:34:16.590 --- 10.0.0.2 ping statistics --- 00:34:16.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.590 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:34:16.591 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:16.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:34:16.591 00:34:16.592 --- 10.0.0.1 ping statistics --- 00:34:16.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.592 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:34:16.593 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.593 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:16.594 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.594 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.595 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.595 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.596 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.596 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.597 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.597 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:16.598 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.598 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.599 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:16.599 10:43:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2855327 00:34:16.600 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:16.601 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2855327 00:34:16.602 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2855327 ']' 00:34:16.602 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.603 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.604 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.605 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.605 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:16.606 [2024-12-09 10:43:37.984602] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:16.606 [2024-12-09 10:43:37.985498] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:16.608 [2024-12-09 10:43:37.985532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.608 [2024-12-09 10:43:38.066487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.609 [2024-12-09 10:43:38.106540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.609 [2024-12-09 10:43:38.106574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.610 [2024-12-09 10:43:38.106581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.610 [2024-12-09 10:43:38.106587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.611 [2024-12-09 10:43:38.106591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.611 [2024-12-09 10:43:38.107133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.612 [2024-12-09 10:43:38.174084] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:16.613 [2024-12-09 10:43:38.174290] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:16.613 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.614 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:34:16.615 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:16.615 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.616 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:16.616 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.617 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:16.618 [2024-12-09 10:43:39.019803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.618 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:16.619 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:16.620 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.620 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:16.620 ************************************ 00:34:16.620 START TEST lvs_grow_clean 00:34:16.621 ************************************ 00:34:16.621 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:34:16.622 10:43:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:16.622 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:16.623 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:16.624 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:16.624 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:16.625 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:16.626 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.627 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.632 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:16.632 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:16.634 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:16.634 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.636 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.636 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:16.637 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:16.638 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:16.639 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 lvol 150 00:34:16.640 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fbbcfd9a-32d7-4940-99f2-1c29ab6920c7 00:34:16.641 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.642 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:16.643 [2024-12-09 10:43:40.067525] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:16.644 [2024-12-09 10:43:40.067663] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:16.644 true 00:34:16.645 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.646 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:16.647 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:16.648 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:16.649 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fbbcfd9a-32d7-4940-99f2-1c29ab6920c7 00:34:16.651 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.651 [2024-12-09 10:43:40.832014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.653 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.653 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2855834 00:34:16.655 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:16.656 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.657 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2855834 /var/tmp/bdevperf.sock 00:34:16.658 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2855834 ']' 00:34:16.658 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:16.659 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.660 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:16.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:16.662 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.662 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 [2024-12-09 10:43:41.063828] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:34:16.665 [2024-12-09 10:43:41.063880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855834 ] 00:34:16.665 [2024-12-09 10:43:41.138071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.666 [2024-12-09 10:43:41.179683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.666 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.667 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:16.669 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:16.669 Nvme0n1 00:34:16.670 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:16.765 [ 00:34:16.765 { 00:34:16.765 "name": "Nvme0n1", 00:34:16.765 "aliases": [ 00:34:16.765 "fbbcfd9a-32d7-4940-99f2-1c29ab6920c7" 00:34:16.765 ], 00:34:16.765 "product_name": "NVMe disk", 00:34:16.765 
"block_size": 4096, 00:34:16.765 "num_blocks": 38912, 00:34:16.765 "uuid": "fbbcfd9a-32d7-4940-99f2-1c29ab6920c7", 00:34:16.765 "numa_id": 1, 00:34:16.765 "assigned_rate_limits": { 00:34:16.765 "rw_ios_per_sec": 0, 00:34:16.765 "rw_mbytes_per_sec": 0, 00:34:16.765 "r_mbytes_per_sec": 0, 00:34:16.765 "w_mbytes_per_sec": 0 00:34:16.765 }, 00:34:16.765 "claimed": false, 00:34:16.765 "zoned": false, 00:34:16.765 "supported_io_types": { 00:34:16.765 "read": true, 00:34:16.765 "write": true, 00:34:16.766 "unmap": true, 00:34:16.766 "flush": true, 00:34:16.766 "reset": true, 00:34:16.766 "nvme_admin": true, 00:34:16.766 "nvme_io": true, 00:34:16.766 "nvme_io_md": false, 00:34:16.766 "write_zeroes": true, 00:34:16.766 "zcopy": false, 00:34:16.766 "get_zone_info": false, 00:34:16.766 "zone_management": false, 00:34:16.766 "zone_append": false, 00:34:16.766 "compare": true, 00:34:16.766 "compare_and_write": true, 00:34:16.766 "abort": true, 00:34:16.766 "seek_hole": false, 00:34:16.766 "seek_data": false, 00:34:16.766 "copy": true, 00:34:16.767 "nvme_iov_md": false 00:34:16.767 }, 00:34:16.767 "memory_domains": [ 00:34:16.767 { 00:34:16.767 "dma_device_id": "system", 00:34:16.767 "dma_device_type": 1 00:34:16.767 } 00:34:16.767 ], 00:34:16.767 "driver_specific": { 00:34:16.767 "nvme": [ 00:34:16.767 { 00:34:16.767 "trid": { 00:34:16.767 "trtype": "TCP", 00:34:16.768 "adrfam": "IPv4", 00:34:16.768 "traddr": "10.0.0.2", 00:34:16.768 "trsvcid": "4420", 00:34:16.768 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:16.768 }, 00:34:16.768 "ctrlr_data": { 00:34:16.768 "cntlid": 1, 00:34:16.769 "vendor_id": "0x8086", 00:34:16.769 "model_number": "SPDK bdev Controller", 00:34:16.769 "serial_number": "SPDK0", 00:34:16.769 "firmware_revision": "25.01", 00:34:16.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.769 "oacs": { 00:34:16.769 "security": 0, 00:34:16.770 "format": 0, 00:34:16.770 "firmware": 0, 00:34:16.770 "ns_manage": 0 00:34:16.770 }, 00:34:16.770 "multi_ctrlr": true, 
00:34:16.770 "ana_reporting": false 00:34:16.770 }, 00:34:16.770 "vs": { 00:34:16.770 "nvme_version": "1.3" 00:34:16.771 }, 00:34:16.771 "ns_data": { 00:34:16.771 "id": 1, 00:34:16.771 "can_share": true 00:34:16.771 } 00:34:16.771 } 00:34:16.771 ], 00:34:16.771 "mp_policy": "active_passive" 00:34:16.771 } 00:34:16.771 } 00:34:16.771 ] 00:34:16.772 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2856062 00:34:16.773 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:16.773 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:16.773 Running I/O for 10 seconds... 00:34:16.774 Latency(us) 00:34:16.774 [2024-12-09T09:44:54.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.775 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:34:16.775 [2024-12-09T09:44:54.500Z] =================================================================================================================== 00:34:16.776 [2024-12-09T09:44:54.500Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:34:16.776 00:34:16.777 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.777 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:34:16.778 [2024-12-09T09:44:54.502Z] 
=================================================================================================================== 00:34:16.778 [2024-12-09T09:44:54.502Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:34:16.778 00:34:16.778 true 00:34:16.779 10:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:16.780 10:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.780 10:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:16.781 10:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:16.782 10:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2856062 00:34:16.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.782 Nvme0n1 : 3.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:34:16.782 [2024-12-09T09:44:54.507Z] =================================================================================================================== 00:34:16.783 [2024-12-09T09:44:54.507Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:34:16.783 00:34:16.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.784 Nvme0n1 : 4.00 23463.25 91.65 0.00 0.00 0.00 0.00 0.00 00:34:16.784 [2024-12-09T09:44:54.509Z] =================================================================================================================== 00:34:16.785 [2024-12-09T09:44:54.509Z] Total : 23463.25 91.65 0.00 0.00 0.00 0.00 0.00 00:34:16.785 00:34:16.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:34:16.786 Nvme0n1 : 5.00 23533.20 91.93 0.00 0.00 0.00 0.00 0.00 00:34:16.786 [2024-12-09T09:44:54.510Z] =================================================================================================================== 00:34:16.786 [2024-12-09T09:44:54.511Z] Total : 23533.20 91.93 0.00 0.00 0.00 0.00 0.00 00:34:16.787 00:34:16.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.788 Nvme0n1 : 6.00 23585.33 92.13 0.00 0.00 0.00 0.00 0.00 00:34:16.788 [2024-12-09T09:44:54.512Z] =================================================================================================================== 00:34:16.788 [2024-12-09T09:44:54.513Z] Total : 23585.33 92.13 0.00 0.00 0.00 0.00 0.00 00:34:16.789 00:34:16.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.789 Nvme0n1 : 7.00 23626.86 92.29 0.00 0.00 0.00 0.00 0.00 00:34:16.789 [2024-12-09T09:44:54.514Z] =================================================================================================================== 00:34:16.790 [2024-12-09T09:44:54.514Z] Total : 23626.86 92.29 0.00 0.00 0.00 0.00 0.00 00:34:16.790 00:34:16.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.791 Nvme0n1 : 8.00 23666.00 92.45 0.00 0.00 0.00 0.00 0.00 00:34:16.791 [2024-12-09T09:44:54.516Z] =================================================================================================================== 00:34:16.792 [2024-12-09T09:44:54.516Z] Total : 23666.00 92.45 0.00 0.00 0.00 0.00 0.00 00:34:16.792 00:34:16.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.793 Nvme0n1 : 9.00 23631.22 92.31 0.00 0.00 0.00 0.00 0.00 00:34:16.793 [2024-12-09T09:44:54.517Z] =================================================================================================================== 00:34:16.793 [2024-12-09T09:44:54.518Z] Total : 23631.22 92.31 0.00 0.00 0.00 0.00 0.00 00:34:16.794 
00:34:16.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.795 Nvme0n1 : 10.00 23655.70 92.41 0.00 0.00 0.00 0.00 0.00 00:34:16.795 [2024-12-09T09:44:54.521Z] =================================================================================================================== 00:34:16.797 [2024-12-09T09:44:54.521Z] Total : 23655.70 92.41 0.00 0.00 0.00 0.00 0.00 00:34:16.797 00:34:16.797 00:34:16.798 Latency(us) 00:34:16.798 [2024-12-09T09:44:54.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.799 Nvme0n1 : 10.00 23661.56 92.43 0.00 0.00 5406.70 3136.37 26838.55 00:34:16.799 [2024-12-09T09:44:54.523Z] =================================================================================================================== 00:34:16.799 [2024-12-09T09:44:54.524Z] Total : 23661.56 92.43 0.00 0.00 5406.70 3136.37 26838.55 00:34:16.800 { 00:34:16.800 "results": [ 00:34:16.800 { 00:34:16.800 "job": "Nvme0n1", 00:34:16.800 "core_mask": "0x2", 00:34:16.800 "workload": "randwrite", 00:34:16.800 "status": "finished", 00:34:16.800 "queue_depth": 128, 00:34:16.800 "io_size": 4096, 00:34:16.800 "runtime": 10.002934, 00:34:16.801 "iops": 23661.557698971123, 00:34:16.801 "mibps": 92.42795976160595, 00:34:16.801 "io_failed": 0, 00:34:16.801 "io_timeout": 0, 00:34:16.801 "avg_latency_us": 5406.696332392762, 00:34:16.801 "min_latency_us": 3136.365714285714, 00:34:16.801 "max_latency_us": 26838.55238095238 00:34:16.801 } 00:34:16.801 ], 00:34:16.801 "core_count": 1 00:34:16.801 } 00:34:16.802 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2855834 00:34:16.802 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2855834 ']' 00:34:16.803 10:43:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2855834 00:34:16.803 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:16.804 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.804 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2855834 00:34:16.805 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:16.805 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:16.806 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2855834' 00:34:16.806 killing process with pid 2855834 00:34:16.807 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2855834 00:34:16.807 Received shutdown signal, test time was about 10.000000 seconds 00:34:16.807 00:34:16.807 Latency(us) 00:34:16.807 [2024-12-09T09:44:54.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.808 [2024-12-09T09:44:54.532Z] =================================================================================================================== 00:34:16.808 [2024-12-09T09:44:54.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:16.821 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2855834 00:34:16.822 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.823 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.824 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.824 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:16.824 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:16.825 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:16.825 10:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:16.826 [2024-12-09 10:43:52.959588] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:16.827 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.827 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:16.828 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.829 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.829 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.830 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.831 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.831 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.832 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.833 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.833 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:16.834 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.834 request: 00:34:16.834 { 00:34:16.834 "uuid": "36fd830d-a75e-412e-ad9c-7ca7b0efc3f4", 00:34:16.835 "method": 
"bdev_lvol_get_lvstores", 00:34:16.835 "req_id": 1 00:34:16.835 } 00:34:16.835 Got JSON-RPC error response 00:34:16.835 response: 00:34:16.835 { 00:34:16.835 "code": -19, 00:34:16.835 "message": "No such device" 00:34:16.835 } 00:34:16.835 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:16.836 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:16.836 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:16.837 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:16.838 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:16.838 aio_bdev 00:34:16.839 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fbbcfd9a-32d7-4940-99f2-1c29ab6920c7 00:34:16.839 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=fbbcfd9a-32d7-4940-99f2-1c29ab6920c7 00:34:16.840 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:16.840 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:16.841 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:16.841 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:16.842 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:16.843 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fbbcfd9a-32d7-4940-99f2-1c29ab6920c7 -t 2000 00:34:16.843 [ 00:34:16.843 { 00:34:16.843 "name": "fbbcfd9a-32d7-4940-99f2-1c29ab6920c7", 00:34:16.843 "aliases": [ 00:34:16.843 "lvs/lvol" 00:34:16.843 ], 00:34:16.843 "product_name": "Logical Volume", 00:34:16.843 "block_size": 4096, 00:34:16.843 "num_blocks": 38912, 00:34:16.844 "uuid": "fbbcfd9a-32d7-4940-99f2-1c29ab6920c7", 00:34:16.844 "assigned_rate_limits": { 00:34:16.844 "rw_ios_per_sec": 0, 00:34:16.844 "rw_mbytes_per_sec": 0, 00:34:16.844 "r_mbytes_per_sec": 0, 00:34:16.844 "w_mbytes_per_sec": 0 00:34:16.844 }, 00:34:16.844 "claimed": false, 00:34:16.844 "zoned": false, 00:34:16.845 "supported_io_types": { 00:34:16.845 "read": true, 00:34:16.845 "write": true, 00:34:16.846 "unmap": true, 00:34:16.846 "flush": false, 00:34:16.846 "reset": true, 00:34:16.846 "nvme_admin": false, 00:34:16.846 "nvme_io": false, 00:34:16.846 "nvme_io_md": false, 00:34:16.846 "write_zeroes": true, 00:34:16.847 "zcopy": false, 00:34:16.847 "get_zone_info": false, 00:34:16.847 "zone_management": false, 00:34:16.847 "zone_append": false, 00:34:16.847 "compare": false, 00:34:16.847 "compare_and_write": false, 00:34:16.847 "abort": false, 00:34:16.847 "seek_hole": true, 00:34:16.847 "seek_data": true, 00:34:16.847 "copy": false, 00:34:16.847 "nvme_iov_md": false 00:34:16.847 }, 00:34:16.847 "driver_specific": { 00:34:16.847 "lvol": { 00:34:16.848 "lvol_store_uuid": "36fd830d-a75e-412e-ad9c-7ca7b0efc3f4", 00:34:16.848 "base_bdev": "aio_bdev", 00:34:16.848 
"thin_provision": false, 00:34:16.848 "num_allocated_clusters": 38, 00:34:16.851 "snapshot": false, 00:34:16.852 "clone": false, 00:34:16.853 "esnap_clone": false 00:34:16.853 } 00:34:16.853 } 00:34:16.853 } 00:34:16.853 ] 00:34:16.853 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:16.853 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:16.853 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.853 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:16.854 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 00:34:16.854 10:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:16.854 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:16.854 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fbbcfd9a-32d7-4940-99f2-1c29ab6920c7 00:34:16.855 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36fd830d-a75e-412e-ad9c-7ca7b0efc3f4 
00:34:16.855 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:16.855 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.855 00:34:16.855 real 0m15.698s 00:34:16.855 user 0m15.197s 00:34:16.855 sys 0m1.513s 00:34:16.855 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.856 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:16.856 ************************************ 00:34:16.856 END TEST lvs_grow_clean 00:34:16.856 ************************************ 00:34:16.856 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:16.856 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:16.856 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.857 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:16.857 ************************************ 00:34:16.857 START TEST lvs_grow_dirty 00:34:16.857 ************************************ 00:34:16.857 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:16.857 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:16.858 10:43:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:16.858 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:16.858 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:16.858 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:16.859 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:16.859 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.861 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.861 10:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:16.861 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:16.862 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:16.862 10:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.863 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.863 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:16.863 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:16.863 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:16.864 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 lvol 150 00:34:16.864 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 00:34:16.864 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.865 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:16.865 [2024-12-09 10:43:55.851524] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:16.865 [2024-12-09 
10:43:55.851668] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:16.865 true 00:34:16.866 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.866 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:16.866 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:16.867 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:16.867 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 00:34:16.868 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.868 [2024-12-09 10:43:56.619967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.868 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.869 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2858415 00:34:16.869 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.869 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:16.870 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2858415 /var/tmp/bdevperf.sock 00:34:16.870 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2858415 ']' 00:34:16.870 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:16.870 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.871 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:16.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:16.871 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.871 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:16.871 [2024-12-09 10:43:56.879258] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:16.872 [2024-12-09 10:43:56.879308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858415 ] 00:34:16.872 [2024-12-09 10:43:56.955009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.876 [2024-12-09 10:43:56.996842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.877 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.877 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:16.877 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:16.877 Nvme0n1 00:34:16.878 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:16.878 [ 00:34:16.878 { 00:34:16.878 "name": "Nvme0n1", 00:34:16.878 "aliases": [ 00:34:16.878 "7df5e5d8-e54e-43cd-8e92-f8dcb52279e5" 00:34:16.878 ], 00:34:16.878 "product_name": "NVMe disk", 00:34:16.878 "block_size": 4096, 00:34:16.878 "num_blocks": 38912, 00:34:16.878 "uuid": "7df5e5d8-e54e-43cd-8e92-f8dcb52279e5", 00:34:16.878 "numa_id": 1, 00:34:16.878 "assigned_rate_limits": { 00:34:16.878 "rw_ios_per_sec": 0, 00:34:16.878 "rw_mbytes_per_sec": 0, 00:34:16.878 "r_mbytes_per_sec": 0, 00:34:16.878 "w_mbytes_per_sec": 0 00:34:16.878 }, 00:34:16.878 "claimed": false, 00:34:16.878 "zoned": false, 
00:34:16.878 "supported_io_types": { 00:34:16.878 "read": true, 00:34:16.878 "write": true, 00:34:16.878 "unmap": true, 00:34:16.878 "flush": true, 00:34:16.878 "reset": true, 00:34:16.878 "nvme_admin": true, 00:34:16.878 "nvme_io": true, 00:34:16.878 "nvme_io_md": false, 00:34:16.879 "write_zeroes": true, 00:34:16.879 "zcopy": false, 00:34:16.879 "get_zone_info": false, 00:34:16.879 "zone_management": false, 00:34:16.879 "zone_append": false, 00:34:16.879 "compare": true, 00:34:16.879 "compare_and_write": true, 00:34:16.879 "abort": true, 00:34:16.879 "seek_hole": false, 00:34:16.879 "seek_data": false, 00:34:16.879 "copy": true, 00:34:16.879 "nvme_iov_md": false 00:34:16.879 }, 00:34:16.879 "memory_domains": [ 00:34:16.879 { 00:34:16.879 "dma_device_id": "system", 00:34:16.879 "dma_device_type": 1 00:34:16.879 } 00:34:16.879 ], 00:34:16.879 "driver_specific": { 00:34:16.879 "nvme": [ 00:34:16.879 { 00:34:16.879 "trid": { 00:34:16.879 "trtype": "TCP", 00:34:16.879 "adrfam": "IPv4", 00:34:16.879 "traddr": "10.0.0.2", 00:34:16.879 "trsvcid": "4420", 00:34:16.879 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:16.879 }, 00:34:16.879 "ctrlr_data": { 00:34:16.879 "cntlid": 1, 00:34:16.879 "vendor_id": "0x8086", 00:34:16.880 "model_number": "SPDK bdev Controller", 00:34:16.880 "serial_number": "SPDK0", 00:34:16.880 "firmware_revision": "25.01", 00:34:16.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.880 "oacs": { 00:34:16.880 "security": 0, 00:34:16.880 "format": 0, 00:34:16.880 "firmware": 0, 00:34:16.880 "ns_manage": 0 00:34:16.880 }, 00:34:16.880 "multi_ctrlr": true, 00:34:16.880 "ana_reporting": false 00:34:16.880 }, 00:34:16.880 "vs": { 00:34:16.880 "nvme_version": "1.3" 00:34:16.880 }, 00:34:16.880 "ns_data": { 00:34:16.880 "id": 1, 00:34:16.880 "can_share": true 00:34:16.880 } 00:34:16.880 } 00:34:16.881 ], 00:34:16.881 "mp_policy": "active_passive" 00:34:16.881 } 00:34:16.881 } 00:34:16.881 ] 00:34:16.881 10:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2858568 00:34:16.881 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:16.881 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:16.881 Running I/O for 10 seconds... 00:34:16.882 Latency(us) 00:34:16.882 [2024-12-09T09:44:54.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.882 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:34:16.882 [2024-12-09T09:44:54.606Z] =================================================================================================================== 00:34:16.882 [2024-12-09T09:44:54.607Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:34:16.883 00:34:16.883 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.883 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:34:16.883 [2024-12-09T09:44:54.607Z] =================================================================================================================== 00:34:16.883 [2024-12-09T09:44:54.608Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:34:16.884 00:34:16.884 true 00:34:16.884 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.884 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:16.884 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:16.885 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:16.885 10:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2858568 00:34:16.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.885 Nvme0n1 : 3.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:34:16.885 [2024-12-09T09:44:54.609Z] =================================================================================================================== 00:34:16.885 [2024-12-09T09:44:54.610Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:34:16.886 00:34:16.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.886 Nvme0n1 : 4.00 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:34:16.886 [2024-12-09T09:44:54.610Z] =================================================================================================================== 00:34:16.886 [2024-12-09T09:44:54.610Z] Total : 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:34:16.886 00:34:16.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.887 Nvme0n1 : 5.00 23253.80 90.84 0.00 0.00 0.00 0.00 0.00 00:34:16.887 [2024-12-09T09:44:54.611Z] =================================================================================================================== 00:34:16.887 [2024-12-09T09:44:54.611Z] Total : 23253.80 90.84 0.00 0.00 0.00 0.00 0.00 00:34:16.887 00:34:16.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:34:16.887 Nvme0n1 : 6.00 23289.00 90.97 0.00 0.00 0.00 0.00 0.00 00:34:16.887 [2024-12-09T09:44:54.612Z] =================================================================================================================== 00:34:16.888 [2024-12-09T09:44:54.612Z] Total : 23289.00 90.97 0.00 0.00 0.00 0.00 0.00 00:34:16.888 00:34:16.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.888 Nvme0n1 : 7.00 23363.86 91.27 0.00 0.00 0.00 0.00 0.00 00:34:16.888 [2024-12-09T09:44:54.612Z] =================================================================================================================== 00:34:16.888 [2024-12-09T09:44:54.612Z] Total : 23363.86 91.27 0.00 0.00 0.00 0.00 0.00 00:34:16.888 00:34:16.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.889 Nvme0n1 : 8.00 23434.00 91.54 0.00 0.00 0.00 0.00 0.00 00:34:16.889 [2024-12-09T09:44:54.613Z] =================================================================================================================== 00:34:16.889 [2024-12-09T09:44:54.613Z] Total : 23434.00 91.54 0.00 0.00 0.00 0.00 0.00 00:34:16.889 00:34:16.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.889 Nvme0n1 : 9.00 23481.44 91.72 0.00 0.00 0.00 0.00 0.00 00:34:16.889 [2024-12-09T09:44:54.614Z] =================================================================================================================== 00:34:16.890 [2024-12-09T09:44:54.614Z] Total : 23481.44 91.72 0.00 0.00 0.00 0.00 0.00 00:34:16.890 00:34:16.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.890 Nvme0n1 : 10.00 23527.30 91.90 0.00 0.00 0.00 0.00 0.00 00:34:16.890 [2024-12-09T09:44:54.614Z] =================================================================================================================== 00:34:16.890 [2024-12-09T09:44:54.614Z] Total : 23527.30 91.90 0.00 0.00 0.00 0.00 0.00 00:34:16.890 00:34:16.890 
00:34:16.891 Latency(us) 00:34:16.891 [2024-12-09T09:44:54.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:16.891 Nvme0n1 : 10.00 23526.33 91.90 0.00 0.00 5437.42 3198.78 26963.38 00:34:16.891 [2024-12-09T09:44:54.615Z] =================================================================================================================== 00:34:16.891 [2024-12-09T09:44:54.615Z] Total : 23526.33 91.90 0.00 0.00 5437.42 3198.78 26963.38 00:34:16.891 { 00:34:16.892 "results": [ 00:34:16.892 { 00:34:16.892 "job": "Nvme0n1", 00:34:16.892 "core_mask": "0x2", 00:34:16.892 "workload": "randwrite", 00:34:16.892 "status": "finished", 00:34:16.892 "queue_depth": 128, 00:34:16.892 "io_size": 4096, 00:34:16.892 "runtime": 10.003134, 00:34:16.892 "iops": 23526.32684916547, 00:34:16.892 "mibps": 91.89971425455262, 00:34:16.892 "io_failed": 0, 00:34:16.892 "io_timeout": 0, 00:34:16.892 "avg_latency_us": 5437.424901311736, 00:34:16.892 "min_latency_us": 3198.7809523809524, 00:34:16.892 "max_latency_us": 26963.382857142857 00:34:16.892 } 00:34:16.892 ], 00:34:16.892 "core_count": 1 00:34:16.892 } 00:34:16.893 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2858415 00:34:16.893 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2858415 ']' 00:34:16.893 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2858415 00:34:16.893 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:16.893 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.894 10:44:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858415 00:34:16.894 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:16.894 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:16.894 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858415' 00:34:16.894 killing process with pid 2858415 00:34:16.894 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2858415 00:34:16.895 Received shutdown signal, test time was about 10.000000 seconds 00:34:16.895 00:34:16.895 Latency(us) 00:34:16.895 [2024-12-09T09:44:54.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.895 [2024-12-09T09:44:54.619Z] =================================================================================================================== 00:34:16.895 [2024-12-09T09:44:54.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:16.896 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2858415 00:34:16.896 10:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.896 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.897 10:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.897 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:16.897 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:16.897 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:16.897 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2855327 00:34:16.898 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2855327 00:34:16.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2855327 Killed "${NVMF_APP[@]}" "$@" 00:34:16.898 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:16.898 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:16.899 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.899 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.899 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:16.899 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2860450 00:34:16.899 10:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2860450 00:34:16.900 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:16.900 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2860450 ']' 00:34:16.900 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.900 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.901 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.901 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.901 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:16.902 [2024-12-09 10:44:08.572927] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:16.902 [2024-12-09 10:44:08.573874] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:16.902 [2024-12-09 10:44:08.573909] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.902 [2024-12-09 10:44:08.654101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.903 [2024-12-09 10:44:08.694162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.903 [2024-12-09 10:44:08.694199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.903 [2024-12-09 10:44:08.694206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.903 [2024-12-09 10:44:08.694212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.903 [2024-12-09 10:44:08.694217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.903 [2024-12-09 10:44:08.694779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.904 [2024-12-09 10:44:08.763353] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:16.904 [2024-12-09 10:44:08.763550] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:16.904 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.904 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:16.904 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:16.905 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.905 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:16.905 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.905 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:16.906 [2024-12-09 10:44:09.008203] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:16.906 [2024-12-09 10:44:09.008400] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:16.906 [2024-12-09 10:44:09.008485] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:16.906 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:16.906 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 00:34:16.907 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 00:34:16.907 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:16.907 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:16.907 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:16.907 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:16.908 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:16.908 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 -t 2000 00:34:16.908 [ 00:34:16.908 { 00:34:16.908 "name": "7df5e5d8-e54e-43cd-8e92-f8dcb52279e5", 00:34:16.908 "aliases": [ 00:34:16.908 "lvs/lvol" 00:34:16.908 ], 00:34:16.908 "product_name": "Logical Volume", 00:34:16.908 "block_size": 4096, 00:34:16.908 "num_blocks": 38912, 00:34:16.909 "uuid": "7df5e5d8-e54e-43cd-8e92-f8dcb52279e5", 00:34:16.909 "assigned_rate_limits": { 00:34:16.909 "rw_ios_per_sec": 0, 00:34:16.909 "rw_mbytes_per_sec": 0, 00:34:16.909 "r_mbytes_per_sec": 0, 00:34:16.909 "w_mbytes_per_sec": 0 00:34:16.909 }, 00:34:16.909 "claimed": false, 00:34:16.909 "zoned": false, 00:34:16.909 "supported_io_types": { 00:34:16.909 "read": true, 00:34:16.909 "write": true, 00:34:16.909 "unmap": true, 00:34:16.909 "flush": false, 00:34:16.909 "reset": true, 00:34:16.909 "nvme_admin": false, 00:34:16.909 "nvme_io": false, 00:34:16.909 "nvme_io_md": false, 00:34:16.909 "write_zeroes": true, 
00:34:16.909 "zcopy": false, 00:34:16.909 "get_zone_info": false, 00:34:16.909 "zone_management": false, 00:34:16.910 "zone_append": false, 00:34:16.910 "compare": false, 00:34:16.910 "compare_and_write": false, 00:34:16.910 "abort": false, 00:34:16.910 "seek_hole": true, 00:34:16.910 "seek_data": true, 00:34:16.910 "copy": false, 00:34:16.910 "nvme_iov_md": false 00:34:16.910 }, 00:34:16.910 "driver_specific": { 00:34:16.910 "lvol": { 00:34:16.910 "lvol_store_uuid": "3b6c10fe-6223-4e32-8bba-4d9e99a52427", 00:34:16.910 "base_bdev": "aio_bdev", 00:34:16.910 "thin_provision": false, 00:34:16.910 "num_allocated_clusters": 38, 00:34:16.910 "snapshot": false, 00:34:16.910 "clone": false, 00:34:16.910 "esnap_clone": false 00:34:16.910 } 00:34:16.910 } 00:34:16.910 } 00:34:16.910 ] 00:34:16.911 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:16.911 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.911 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:16.911 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:16.912 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.912 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:16.913 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:16.913 10:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:16.913 [2024-12-09 10:44:09.975290] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:16.914 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.914 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:16.915 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.915 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.916 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.916 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.916 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.917 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.917 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.917 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:16.918 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:16.918 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.918 request: 00:34:16.918 { 00:34:16.918 "uuid": "3b6c10fe-6223-4e32-8bba-4d9e99a52427", 00:34:16.918 "method": "bdev_lvol_get_lvstores", 00:34:16.918 "req_id": 1 00:34:16.918 } 00:34:16.919 Got JSON-RPC error response 00:34:16.919 response: 00:34:16.919 { 00:34:16.919 "code": -19, 00:34:16.919 "message": "No such device" 00:34:16.919 } 00:34:16.919 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:16.919 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:16.920 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:16.920 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:16.921 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:16.921 aio_bdev 00:34:16.921 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 00:34:16.921 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 00:34:16.922 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:16.922 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:16.922 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:16.923 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:16.923 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:16.924 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 -t 2000 00:34:16.924 [ 00:34:16.924 { 00:34:16.924 "name": "7df5e5d8-e54e-43cd-8e92-f8dcb52279e5", 00:34:16.924 "aliases": [ 00:34:16.924 "lvs/lvol" 00:34:16.924 ], 00:34:16.924 "product_name": "Logical Volume", 00:34:16.924 "block_size": 4096, 00:34:16.924 "num_blocks": 38912, 00:34:16.924 "uuid": "7df5e5d8-e54e-43cd-8e92-f8dcb52279e5", 00:34:16.924 "assigned_rate_limits": { 00:34:16.924 "rw_ios_per_sec": 0, 00:34:16.924 "rw_mbytes_per_sec": 0, 00:34:16.924 
"r_mbytes_per_sec": 0, 00:34:16.924 "w_mbytes_per_sec": 0 00:34:16.925 }, 00:34:16.925 "claimed": false, 00:34:16.925 "zoned": false, 00:34:16.925 "supported_io_types": { 00:34:16.925 "read": true, 00:34:16.925 "write": true, 00:34:16.925 "unmap": true, 00:34:16.925 "flush": false, 00:34:16.925 "reset": true, 00:34:16.925 "nvme_admin": false, 00:34:16.925 "nvme_io": false, 00:34:16.925 "nvme_io_md": false, 00:34:16.925 "write_zeroes": true, 00:34:16.925 "zcopy": false, 00:34:16.925 "get_zone_info": false, 00:34:16.925 "zone_management": false, 00:34:16.925 "zone_append": false, 00:34:16.926 "compare": false, 00:34:16.926 "compare_and_write": false, 00:34:16.926 "abort": false, 00:34:16.926 "seek_hole": true, 00:34:16.926 "seek_data": true, 00:34:16.926 "copy": false, 00:34:16.926 "nvme_iov_md": false 00:34:16.926 }, 00:34:16.926 "driver_specific": { 00:34:16.926 "lvol": { 00:34:16.926 "lvol_store_uuid": "3b6c10fe-6223-4e32-8bba-4d9e99a52427", 00:34:16.926 "base_bdev": "aio_bdev", 00:34:16.926 "thin_provision": false, 00:34:16.926 "num_allocated_clusters": 38, 00:34:16.926 "snapshot": false, 00:34:16.926 "clone": false, 00:34:16.926 "esnap_clone": false 00:34:16.926 } 00:34:16.926 } 00:34:16.926 } 00:34:16.926 ] 00:34:16.927 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:16.927 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.927 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:16.928 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:16.928 10:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.928 10:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:16.928 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:16.929 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7df5e5d8-e54e-43cd-8e92-f8dcb52279e5 00:34:16.929 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b6c10fe-6223-4e32-8bba-4d9e99a52427 00:34:16.930 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:16.930 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:16.930 00:34:16.930 real 0m16.907s 00:34:16.930 user 0m34.461s 00:34:16.930 sys 0m3.740s 00:34:16.930 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.931 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:16.931 ************************************ 00:34:16.931 END TEST lvs_grow_dirty 00:34:16.931 ************************************ 
00:34:16.931 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:16.931 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:16.931 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:16.931 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:16.932 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:16.932 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:16.932 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:16.932 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:16.933 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:16.933 nvmf_trace.0 00:34:16.933 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:16.933 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:16.933 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.934 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:16.934 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.934 10:44:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:16.934 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.934 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:16.934 rmmod nvme_tcp 00:34:16.934 rmmod nvme_fabrics 00:34:16.934 rmmod nvme_keyring 00:34:16.935 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:16.935 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:16.935 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:16.935 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2860450 ']' 00:34:16.935 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2860450 00:34:16.936 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2860450 ']' 00:34:16.936 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2860450 00:34:16.936 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:16.936 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.936 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860450 00:34:16.937 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:16.937 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.937 
10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860450' 00:34:16.937 killing process with pid 2860450 00:34:16.937 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2860450 00:34:16.938 10:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2860450 00:34:16.938 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:16.938 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:16.938 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:16.938 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:16.938 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:16.939 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:16.939 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:16.939 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:16.939 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:16.940 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.940 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.940 10:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.940 
10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.940 00:34:16.940 real 0m42.427s 00:34:16.940 user 0m52.330s 00:34:16.940 sys 0m10.174s 00:34:16.940 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.941 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:16.941 ************************************ 00:34:16.941 END TEST nvmf_lvs_grow 00:34:16.941 ************************************ 00:34:16.941 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:16.941 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:16.941 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.942 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:16.942 ************************************ 00:34:16.942 START TEST nvmf_bdev_io_wait 00:34:16.942 ************************************ 00:34:16.942 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:16.942 * Looking for test storage... 
00:34:16.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.943 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:16.943 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:34:16.943 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:16.943 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:16.943 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.944 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.944 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.944 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.944 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.944 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.944 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.945 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.945 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.945 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.945 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:34:16.945 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:16.946 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:16.946 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.946 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:16.947 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:16.947 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:16.947 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.947 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:16.948 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.948 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:16.948 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:16.948 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.949 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:16.949 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.949 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.949 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.950 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:16.950 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.950 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:16.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.951 --rc genhtml_branch_coverage=1 00:34:16.951 --rc genhtml_function_coverage=1 00:34:16.951 --rc genhtml_legend=1 00:34:16.951 --rc geninfo_all_blocks=1 00:34:16.951 --rc geninfo_unexecuted_blocks=1 00:34:16.951 00:34:16.951 ' 00:34:16.951 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:16.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.951 --rc genhtml_branch_coverage=1 00:34:16.952 --rc genhtml_function_coverage=1 00:34:16.952 --rc genhtml_legend=1 00:34:16.952 --rc geninfo_all_blocks=1 00:34:16.952 --rc geninfo_unexecuted_blocks=1 00:34:16.952 00:34:16.952 ' 00:34:16.952 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:16.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.952 --rc genhtml_branch_coverage=1 00:34:16.952 --rc genhtml_function_coverage=1 00:34:16.952 --rc genhtml_legend=1 00:34:16.953 --rc geninfo_all_blocks=1 00:34:16.953 --rc geninfo_unexecuted_blocks=1 00:34:16.953 00:34:16.953 ' 00:34:16.953 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:16.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.953 --rc genhtml_branch_coverage=1 00:34:16.953 --rc genhtml_function_coverage=1 
00:34:16.953 --rc genhtml_legend=1 00:34:16.953 --rc geninfo_all_blocks=1 00:34:16.953 --rc geninfo_unexecuted_blocks=1 00:34:16.953 00:34:16.953 ' 00:34:16.954 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.954 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:16.954 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.955 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.955 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.955 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.956 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.956 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.956 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.956 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.957 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.957 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.957 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:16.958 10:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:16.958 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.959 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.959 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.959 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.960 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.960 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.961 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.961 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.962 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.963 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.965 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.967 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.968 10:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:16.970 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.970 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:16.970 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.971 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.971 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.971 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.972 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.972 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:16.972 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:16.975 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.975 10:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.976 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.976 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:16.976 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:16.977 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:16.977 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:16.977 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.978 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:16.978 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:16.978 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:16.979 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.979 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.980 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.980 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:16.980 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:16.981 10:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.981 10:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:16.981 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.982 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.982 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.982 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.983 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.983 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.983 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.984 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.984 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:16.984 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:16.985 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.985 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:16.985 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.985 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:16.986 10:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:16.986 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.987 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.987 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.987 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.988 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.988 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.989 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.989 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.989 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.990 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.990 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.991 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.991 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.991 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.992 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:16.992 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.992 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.992 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.993 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.993 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:16.993 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:16.994 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.994 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.994 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.995 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.995 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.995 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.996 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:16.996 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:34:16.996 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.996 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.997 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.997 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.997 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.998 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.998 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.998 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.999 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.999 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.000 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.000 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.000 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.000 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.001 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.001 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:17.001 Found net devices under 0000:86:00.0: cvl_0_0 00:34:17.002 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.002 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.003 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.003 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.003 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.004 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.004 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.004 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.005 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:17.005 Found net devices under 0000:86:00.1: cvl_0_1 00:34:17.005 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.006 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.006 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.006 10:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.006 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.007 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.007 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.007 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.008 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.008 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.009 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.009 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.009 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.010 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.010 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.010 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.011 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.011 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:34:17.011 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.012 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.012 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.012 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.013 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.013 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.014 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.014 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.014 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.015 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.016 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:17.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:34:17.016 00:34:17.016 --- 10.0.0.2 ping statistics --- 00:34:17.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.016 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:34:17.017 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:17.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:34:17.017 00:34:17.017 --- 10.0.0.1 ping statistics --- 00:34:17.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.017 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:34:17.018 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.018 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:17.018 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.019 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.019 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.019 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.020 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.020 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.020 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.021 10:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:17.021 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.021 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.022 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.022 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2864498 00:34:17.022 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2864498 00:34:17.023 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2864498 ']' 00:34:17.023 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.023 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.024 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:17.025 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:17.025 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.026 10:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.026 [2024-12-09 10:44:20.488890] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:17.026 [2024-12-09 10:44:20.489877] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:34:17.027 [2024-12-09 10:44:20.489916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.027 [2024-12-09 10:44:20.575451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.028 [2024-12-09 10:44:20.621568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.028 [2024-12-09 10:44:20.621599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.028 [2024-12-09 10:44:20.621608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.029 [2024-12-09 10:44:20.621615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.029 [2024-12-09 10:44:20.621620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:17.030 [2024-12-09 10:44:20.623040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.030 [2024-12-09 10:44:20.623151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.030 [2024-12-09 10:44:20.623167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.030 [2024-12-09 10:44:20.623171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.031 [2024-12-09 10:44:20.623573] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:17.031 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.031 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:17.032 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.032 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.033 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.033 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.034 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:17.034 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.034 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.035 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.035 10:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:17.035 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.036 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.036 [2024-12-09 10:44:21.423724] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:17.036 [2024-12-09 10:44:21.424030] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:17.037 [2024-12-09 10:44:21.424061] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:17.037 [2024-12-09 10:44:21.424451] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:17.038 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.038 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:17.039 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.039 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.039 [2024-12-09 10:44:21.435745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.040 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.040 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.041 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.041 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.041 Malloc0 00:34:17.041 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.042 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.042 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.043 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.043 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.044 10:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.044 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.044 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.045 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.045 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.046 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.046 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.047 [2024-12-09 10:44:21.504023] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.047 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.047 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2864744 00:34:17.048 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:17.049 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:17.049 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2864746 00:34:17.049 10:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.050 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.050 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.050 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.050 { 00:34:17.050 "params": { 00:34:17.051 "name": "Nvme$subsystem", 00:34:17.051 "trtype": "$TEST_TRANSPORT", 00:34:17.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.051 "adrfam": "ipv4", 00:34:17.051 "trsvcid": "$NVMF_PORT", 00:34:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.051 "hdgst": ${hdgst:-false}, 00:34:17.051 "ddgst": ${ddgst:-false} 00:34:17.051 }, 00:34:17.051 "method": "bdev_nvme_attach_controller" 00:34:17.051 } 00:34:17.051 EOF 00:34:17.052 )") 00:34:17.052 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:17.053 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:17.053 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2864748 00:34:17.053 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.053 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.054 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.054 10:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.054 { 00:34:17.054 "params": { 00:34:17.054 "name": "Nvme$subsystem", 00:34:17.055 "trtype": "$TEST_TRANSPORT", 00:34:17.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.055 "adrfam": "ipv4", 00:34:17.055 "trsvcid": "$NVMF_PORT", 00:34:17.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.055 "hdgst": ${hdgst:-false}, 00:34:17.056 "ddgst": ${ddgst:-false} 00:34:17.056 }, 00:34:17.056 "method": "bdev_nvme_attach_controller" 00:34:17.056 } 00:34:17.056 EOF 00:34:17.056 )") 00:34:17.057 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:17.057 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2864752 00:34:17.057 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:17.058 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.058 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:17.058 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.059 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.059 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.059 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:17.060 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.060 { 00:34:17.060 "params": { 00:34:17.060 "name": "Nvme$subsystem", 00:34:17.060 "trtype": "$TEST_TRANSPORT", 00:34:17.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.060 "adrfam": "ipv4", 00:34:17.060 "trsvcid": "$NVMF_PORT", 00:34:17.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.061 "hdgst": ${hdgst:-false}, 00:34:17.061 "ddgst": ${ddgst:-false} 00:34:17.061 }, 00:34:17.061 "method": "bdev_nvme_attach_controller" 00:34:17.061 } 00:34:17.061 EOF 00:34:17.061 )") 00:34:17.061 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:17.062 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.062 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.062 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.063 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.063 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.063 { 00:34:17.063 "params": { 00:34:17.063 "name": "Nvme$subsystem", 00:34:17.063 "trtype": "$TEST_TRANSPORT", 00:34:17.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.063 "adrfam": "ipv4", 00:34:17.063 "trsvcid": "$NVMF_PORT", 00:34:17.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.064 "hdgst": ${hdgst:-false}, 00:34:17.064 "ddgst": ${ddgst:-false} 00:34:17.064 }, 00:34:17.064 "method": 
"bdev_nvme_attach_controller" 00:34:17.064 } 00:34:17.064 EOF 00:34:17.064 )") 00:34:17.064 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.064 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2864744 00:34:17.065 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.065 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:17.065 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:17.065 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.065 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:17.065 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.065 "params": { 00:34:17.065 "name": "Nvme1", 00:34:17.065 "trtype": "tcp", 00:34:17.065 "traddr": "10.0.0.2", 00:34:17.065 "adrfam": "ipv4", 00:34:17.065 "trsvcid": "4420", 00:34:17.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.065 "hdgst": false, 00:34:17.066 "ddgst": false 00:34:17.066 }, 00:34:17.066 "method": "bdev_nvme_attach_controller" 00:34:17.066 }' 00:34:17.066 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.066 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.066 "params": { 00:34:17.066 "name": "Nvme1", 00:34:17.066 "trtype": "tcp", 00:34:17.067 "traddr": "10.0.0.2", 00:34:17.067 "adrfam": "ipv4", 00:34:17.067 "trsvcid": "4420", 00:34:17.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.067 "hdgst": false, 
00:34:17.067 "ddgst": false 00:34:17.067 }, 00:34:17.067 "method": "bdev_nvme_attach_controller" 00:34:17.067 }' 00:34:17.067 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:17.068 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.068 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.068 "params": { 00:34:17.068 "name": "Nvme1", 00:34:17.068 "trtype": "tcp", 00:34:17.068 "traddr": "10.0.0.2", 00:34:17.068 "adrfam": "ipv4", 00:34:17.068 "trsvcid": "4420", 00:34:17.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.069 "hdgst": false, 00:34:17.069 "ddgst": false 00:34:17.069 }, 00:34:17.069 "method": "bdev_nvme_attach_controller" 00:34:17.069 }' 00:34:17.069 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.069 10:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.069 "params": { 00:34:17.069 "name": "Nvme1", 00:34:17.069 "trtype": "tcp", 00:34:17.070 "traddr": "10.0.0.2", 00:34:17.070 "adrfam": "ipv4", 00:34:17.070 "trsvcid": "4420", 00:34:17.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.070 "hdgst": false, 00:34:17.070 "ddgst": false 00:34:17.070 }, 00:34:17.070 "method": "bdev_nvme_attach_controller" 00:34:17.070 }' 00:34:17.070 [2024-12-09 10:44:21.556965] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
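Each of the four bdevperf instances above receives its controller config from `gen_nvmf_target_json`, which emits one attach blob per subsystem. A Python sketch reproducing the rendered JSON seen in the trace — field names and values are copied from the log; the helper name and defaults are illustrative, not the actual shell implementation:

```python
import json

def gen_target_json(subsystem: int = 1,
                    traddr: str = "10.0.0.2",
                    trsvcid: str = "4420") -> dict:
    """Build one bdev_nvme_attach_controller config blob, as printf'd
    by gen_nvmf_target_json in the trace above."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,          # header digest disabled in the log
            "ddgst": False,          # data digest disabled in the log
        },
        "method": "bdev_nvme_attach_controller",
    }

print(json.dumps(gen_target_json(), indent=2))
```

In the trace the same blob (subsystem 1, target IP `10.0.0.2`, port `4420`) is printed four times, once per bdevperf process (`write`, `read`, `flush`, `unmap`), each fed over `/dev/fd/63`.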
00:34:17.071 [2024-12-09 10:44:21.557016] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:17.071 [2024-12-09 10:44:21.557899] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:34:17.072 [2024-12-09 10:44:21.557941] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:17.072 [2024-12-09 10:44:21.558304] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:34:17.073 [2024-12-09 10:44:21.558341] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:17.073 [2024-12-09 10:44:21.561885] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:17.074 [2024-12-09 10:44:21.561930] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:17.074 [2024-12-09 10:44:21.746394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.074 [2024-12-09 10:44:21.788853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:17.075 [2024-12-09 10:44:21.840799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.075 [2024-12-09 10:44:21.883215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:17.075 [2024-12-09 10:44:21.933000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.075 [2024-12-09 10:44:21.992977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:17.075 [2024-12-09 10:44:21.993068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.076 [2024-12-09 10:44:22.032898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:17.076 Running I/O for 1 seconds... 00:34:17.076 Running I/O for 1 seconds... 00:34:17.076 Running I/O for 1 seconds... 00:34:17.076 Running I/O for 1 seconds... 
00:34:17.076 8353.00 IOPS, 32.63 MiB/s 00:34:17.076 Latency(us) 00:34:17.076 [2024-12-09T09:44:54.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.077 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:17.077 Nvme1n1 : 1.02 8352.39 32.63 0.00 0.00 15212.65 3479.65 20472.20 00:34:17.077 [2024-12-09T09:44:54.801Z] =================================================================================================================== 00:34:17.077 [2024-12-09T09:44:54.801Z] Total : 8352.39 32.63 0.00 0.00 15212.65 3479.65 20472.20 00:34:17.077 13322.00 IOPS, 52.04 MiB/s 00:34:17.100 Latency(us) 00:34:17.100 [2024-12-09T09:44:54.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.101 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:17.101 Nvme1n1 : 1.01 13390.21 52.31 0.00 0.00 9531.93 1927.07 13981.01 00:34:17.101 [2024-12-09T09:44:54.825Z] =================================================================================================================== 00:34:17.101 [2024-12-09T09:44:54.825Z] Total : 13390.21 52.31 0.00 0.00 9531.93 1927.07 13981.01 00:34:17.101 8381.00 IOPS, 32.74 MiB/s 00:34:17.102 Latency(us) 00:34:17.102 [2024-12-09T09:44:54.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.102 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:17.102 Nvme1n1 : 1.01 8510.10 33.24 0.00 0.00 15008.61 3073.95 32206.26 00:34:17.102 [2024-12-09T09:44:54.826Z] =================================================================================================================== 00:34:17.102 [2024-12-09T09:44:54.827Z] Total : 8510.10 33.24 0.00 0.00 15008.61 3073.95 32206.26 00:34:17.103 244168.00 IOPS, 953.78 MiB/s 00:34:17.103 Latency(us) 00:34:17.103 [2024-12-09T09:44:54.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.103 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:34:17.104 Nvme1n1 : 1.00 243792.51 952.31 0.00 0.00 522.84 224.30 1513.57 00:34:17.104 [2024-12-09T09:44:54.828Z] =================================================================================================================== 00:34:17.104 [2024-12-09T09:44:54.828Z] Total : 243792.51 952.31 0.00 0.00 522.84 224.30 1513.57 00:34:17.104 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2864746 00:34:17.105 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2864748 00:34:17.105 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2864752 00:34:17.105 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:17.105 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.106 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.106 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.106 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:17.106 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:17.107 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.107 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:17.107 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.107 10:44:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:17.108 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.108 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.108 rmmod nvme_tcp 00:34:17.108 rmmod nvme_fabrics 00:34:17.108 rmmod nvme_keyring 00:34:17.108 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.108 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:17.109 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:17.109 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2864498 ']' 00:34:17.109 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2864498 00:34:17.109 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2864498 ']' 00:34:17.110 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2864498 00:34:17.110 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:17.110 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.111 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864498 00:34:17.111 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:17.111 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:17.111 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864498' 00:34:17.111 killing process with pid 2864498 00:34:17.112 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2864498 00:34:17.112 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2864498 00:34:17.112 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.112 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.113 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.113 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:17.113 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:17.113 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.114 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.114 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.114 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.115 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.115 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.115 
10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.115 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.115 00:34:17.115 real 0m11.464s 00:34:17.115 user 0m15.267s 00:34:17.115 sys 0m6.472s 00:34:17.116 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.116 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.116 ************************************ 00:34:17.116 END TEST nvmf_bdev_io_wait 00:34:17.116 ************************************ 00:34:17.117 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:17.117 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:17.117 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.117 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:17.117 ************************************ 00:34:17.117 START TEST nvmf_queue_depth 00:34:17.118 ************************************ 00:34:17.118 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:17.118 * Looking for test storage... 
00:34:17.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.119 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:17.119 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:34:17.119 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:17.119 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:17.120 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.120 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.120 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.120 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.121 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.121 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.121 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.122 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.122 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.122 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.122 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:34:17.123 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:17.123 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:17.123 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.123 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:17.124 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:17.124 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:17.124 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.124 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:17.125 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.125 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:17.125 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:17.125 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.126 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:17.126 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.126 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.126 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:34:17.127 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:17.127 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.127 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:17.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.127 --rc genhtml_branch_coverage=1 00:34:17.128 --rc genhtml_function_coverage=1 00:34:17.128 --rc genhtml_legend=1 00:34:17.128 --rc geninfo_all_blocks=1 00:34:17.128 --rc geninfo_unexecuted_blocks=1 00:34:17.128 00:34:17.128 ' 00:34:17.128 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:17.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.128 --rc genhtml_branch_coverage=1 00:34:17.128 --rc genhtml_function_coverage=1 00:34:17.128 --rc genhtml_legend=1 00:34:17.128 --rc geninfo_all_blocks=1 00:34:17.128 --rc geninfo_unexecuted_blocks=1 00:34:17.128 00:34:17.128 ' 00:34:17.129 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:17.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.129 --rc genhtml_branch_coverage=1 00:34:17.129 --rc genhtml_function_coverage=1 00:34:17.129 --rc genhtml_legend=1 00:34:17.129 --rc geninfo_all_blocks=1 00:34:17.129 --rc geninfo_unexecuted_blocks=1 00:34:17.129 00:34:17.129 ' 00:34:17.129 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.130 --rc genhtml_branch_coverage=1 00:34:17.130 --rc genhtml_function_coverage=1 00:34:17.130 --rc genhtml_legend=1 00:34:17.130 --rc 
geninfo_all_blocks=1 00:34:17.130 --rc geninfo_unexecuted_blocks=1 00:34:17.130 00:34:17.130 ' 00:34:17.130 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.131 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:17.131 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.131 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.131 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.132 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.132 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.132 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.132 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.133 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.133 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.133 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.133 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:17.134 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:17.134 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.134 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.135 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.135 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.135 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.136 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.136 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.136 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.136 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.138 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.139 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.141 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.141 10:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:17.142 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.143 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:17.143 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.143 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.143 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.144 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.144 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.144 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:17.144 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:17.145 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.145 10:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.145 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.145 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:17.146 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:17.146 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:17.146 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:17.146 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.147 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.147 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.147 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.147 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.148 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.148 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.148 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.149 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.149 10:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.149 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.149 10:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.150 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.150 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.150 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.150 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.151 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.151 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.152 
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.152 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.153 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:17.154 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.154 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.155 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.155 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.155 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.155 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.155 10:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:17.155 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:17.155 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.155 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.155 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.156 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.159 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.159 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:34:17.160 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.161 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:17.161 Found net devices under 0000:86:00.0: cvl_0_0 00:34:17.161 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.161 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.162 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.162 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.162 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.163 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.163 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.163 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.163 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:17.164 Found net devices under 0000:86:00.1: cvl_0_1 00:34:17.164 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.164 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.164 10:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.165 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.165 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.165 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.165 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.166 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.166 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.166 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.166 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.167 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.167 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.167 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.168 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.168 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.168 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:34:17.168 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.169 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.169 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.169 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.170 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.170 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.170 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.171 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.171 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.172 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.172 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.172 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:17.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:34:17.173 00:34:17.173 --- 10.0.0.2 ping statistics --- 00:34:17.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.173 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:34:17.173 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:17.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:34:17.173 00:34:17.173 --- 10.0.0.1 ping statistics --- 00:34:17.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.173 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:34:17.174 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.174 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:17.174 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.174 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.174 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.175 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.175 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.175 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.175 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.175 10:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:17.175 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.176 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.176 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.176 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2868531 00:34:17.176 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2868531 00:34:17.176 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:17.176 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868531 ']' 00:34:17.177 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.177 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.177 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:17.177 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.178 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.178 [2024-12-09 10:44:31.999029] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:17.178 [2024-12-09 10:44:31.999943] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:34:17.178 [2024-12-09 10:44:31.999977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.179 [2024-12-09 10:44:32.067161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.179 [2024-12-09 10:44:32.105990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.179 [2024-12-09 10:44:32.106026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.179 [2024-12-09 10:44:32.106034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.179 [2024-12-09 10:44:32.106053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.179 [2024-12-09 10:44:32.106058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.179 [2024-12-09 10:44:32.106614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.180 [2024-12-09 10:44:32.172910] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:17.180 [2024-12-09 10:44:32.173132] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:17.180 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.180 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:17.180 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.180 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.181 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.181 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.181 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:17.181 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.181 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.182 [2024-12-09 10:44:32.251304] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.182 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.182 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.182 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.182 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.182 Malloc0 00:34:17.183 10:44:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.183 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.183 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.183 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.183 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.184 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.184 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.184 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.184 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.184 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.184 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.185 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.185 [2024-12-09 10:44:32.319430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.185 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.185 
10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2868553 00:34:17.185 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:17.186 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:17.186 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2868553 /var/tmp/bdevperf.sock 00:34:17.186 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868553 ']' 00:34:17.186 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:17.186 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.186 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:17.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:17.187 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.187 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.187 [2024-12-09 10:44:32.367527] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:17.187 [2024-12-09 10:44:32.367569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868553 ] 00:34:17.187 [2024-12-09 10:44:32.441657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.187 [2024-12-09 10:44:32.483593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.188 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.188 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:17.188 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:17.188 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.188 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.188 NVMe0n1 00:34:17.188 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.189 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:17.189 Running I/O for 10 seconds... 
00:34:17.189 12077.00 IOPS, 47.18 MiB/s [2024-12-09T09:44:54.913Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-09T09:44:54.913Z] 12300.67 IOPS, 48.05 MiB/s [2024-12-09T09:44:54.913Z] 12320.00 IOPS, 48.12 MiB/s [2024-12-09T09:44:54.913Z] 12447.00 IOPS, 48.62 MiB/s [2024-12-09T09:44:54.913Z] 12455.17 IOPS, 48.65 MiB/s [2024-12-09T09:44:54.913Z] 12472.43 IOPS, 48.72 MiB/s [2024-12-09T09:44:54.913Z] 12475.62 IOPS, 48.73 MiB/s [2024-12-09T09:44:54.913Z] 12512.00 IOPS, 48.88 MiB/s [2024-12-09T09:44:54.913Z] 12499.60 IOPS, 48.83 MiB/s 00:34:17.189 Latency(us) 00:34:17.189 [2024-12-09T09:44:54.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:17.190 Verification LBA range: start 0x0 length 0x4000 00:34:17.190 NVMe0n1 : 10.05 12540.42 48.99 0.00 0.00 81394.82 8113.98 51679.82 00:34:17.190 [2024-12-09T09:44:54.914Z] =================================================================================================================== 00:34:17.190 [2024-12-09T09:44:54.914Z] Total : 12540.42 48.99 0.00 0.00 81394.82 8113.98 51679.82 00:34:17.190 { 00:34:17.190 "results": [ 00:34:17.190 { 00:34:17.190 "job": "NVMe0n1", 00:34:17.190 "core_mask": "0x1", 00:34:17.190 "workload": "verify", 00:34:17.190 "status": "finished", 00:34:17.190 "verify_range": { 00:34:17.190 "start": 0, 00:34:17.190 "length": 16384 00:34:17.190 }, 00:34:17.190 "queue_depth": 1024, 00:34:17.190 "io_size": 4096, 00:34:17.190 "runtime": 10.048548, 00:34:17.190 "iops": 12540.418774931462, 00:34:17.190 "mibps": 48.986010839576025, 00:34:17.191 "io_failed": 0, 00:34:17.191 "io_timeout": 0, 00:34:17.191 "avg_latency_us": 81394.82227836661, 00:34:17.191 "min_latency_us": 8113.980952380953, 00:34:17.191 "max_latency_us": 51679.817142857144 00:34:17.191 } 00:34:17.191 ], 00:34:17.191 "core_count": 1 00:34:17.191 } 00:34:17.191 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2868553 00:34:17.191 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868553 ']' 00:34:17.191 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2868553 00:34:17.191 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:17.191 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.192 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868553 00:34:17.192 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:17.192 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:17.192 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868553' 00:34:17.192 killing process with pid 2868553 00:34:17.192 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868553 00:34:17.192 Received shutdown signal, test time was about 10.000000 seconds 00:34:17.192 00:34:17.192 Latency(us) 00:34:17.192 [2024-12-09T09:44:54.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.193 [2024-12-09T09:44:54.917Z] =================================================================================================================== 00:34:17.193 [2024-12-09T09:44:54.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:17.193 10:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868553 00:34:17.193 10:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:17.193 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:17.193 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.193 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:17.194 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.194 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:17.194 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.194 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.194 rmmod nvme_tcp 00:34:17.194 rmmod nvme_fabrics 00:34:17.194 rmmod nvme_keyring 00:34:17.194 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.194 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:17.195 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:17.195 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2868531 ']' 00:34:17.195 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2868531 00:34:17.195 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868531 ']' 00:34:17.195 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2868531 00:34:17.195 10:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:17.195 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.195 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868531 00:34:17.196 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:17.196 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:17.196 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868531' 00:34:17.196 killing process with pid 2868531 00:34:17.196 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868531 00:34:17.196 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868531 00:34:17.196 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.197 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.197 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.197 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:17.197 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:17.197 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.197 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:34:17.197 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.198 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.198 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.198 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.198 10:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.198 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.198 00:34:17.198 real 0m19.637s 00:34:17.198 user 0m22.642s 00:34:17.198 sys 0m6.244s 00:34:17.198 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.199 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.199 ************************************ 00:34:17.199 END TEST nvmf_queue_depth 00:34:17.199 ************************************ 00:34:17.199 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:17.199 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:17.199 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.199 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:17.199 ************************************ 00:34:17.199 START 
TEST nvmf_target_multipath 00:34:17.199 ************************************ 00:34:17.200 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:17.200 * Looking for test storage... 00:34:17.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.200 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:17.200 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:34:17.200 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:17.200 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:17.201 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.201 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.201 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.201 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.201 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.201 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.201 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.201 10:44:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:17.202 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.203 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:17.204 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.204 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.204 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.204 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:17.204 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.204 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:17.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.204 --rc genhtml_branch_coverage=1 00:34:17.205 --rc genhtml_function_coverage=1 00:34:17.205 --rc genhtml_legend=1 00:34:17.205 --rc geninfo_all_blocks=1 00:34:17.205 --rc geninfo_unexecuted_blocks=1 00:34:17.205 00:34:17.205 ' 00:34:17.205 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:17.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.205 --rc genhtml_branch_coverage=1 00:34:17.205 --rc genhtml_function_coverage=1 00:34:17.205 --rc genhtml_legend=1 00:34:17.205 --rc geninfo_all_blocks=1 00:34:17.205 --rc geninfo_unexecuted_blocks=1 00:34:17.205 00:34:17.205 ' 00:34:17.205 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:17.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.205 --rc genhtml_branch_coverage=1 00:34:17.205 --rc genhtml_function_coverage=1 00:34:17.205 --rc genhtml_legend=1 00:34:17.205 --rc geninfo_all_blocks=1 00:34:17.206 --rc geninfo_unexecuted_blocks=1 00:34:17.206 00:34:17.206 ' 00:34:17.206 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:17.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.206 --rc genhtml_branch_coverage=1 00:34:17.206 --rc genhtml_function_coverage=1 00:34:17.206 --rc genhtml_legend=1 00:34:17.206 --rc geninfo_all_blocks=1 00:34:17.206 --rc geninfo_unexecuted_blocks=1 00:34:17.206 00:34:17.206 ' 00:34:17.206 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.206 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:34:17.207 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.207 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.207 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.207 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.207 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.207 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.207 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.208 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.208 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.208 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.208 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:17.208 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:17.208 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.208 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.209 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.209 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.209 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.209 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.209 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.209 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.209 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.210 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.210 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.211 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.211 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:17.211 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.211 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.212 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.212 10:44:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.213 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.214 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.214 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.214 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.214 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.214 10:44:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.214 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.214 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.214 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:17.214 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.215 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.216 10:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.216 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.217 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:17.218 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.218 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:17.219 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.219 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.220 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:17.221 Found net devices under 0000:86:00.0: cvl_0_0 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.221 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.222 10:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:17.222 Found net devices under 0000:86:00.1: cvl_0_1 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.222 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.223 10:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.223 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.224 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.225 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.225 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.225 10:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.225 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.225 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:34:17.225 00:34:17.225 --- 10.0.0.2 ping statistics --- 00:34:17.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.225 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:34:17.226 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:17.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:34:17.226 00:34:17.226 --- 10.0.0.1 ping statistics --- 00:34:17.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.226 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:34:17.226 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.226 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:17.226 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.226 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.226 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.226 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:17.227 only one NIC for nvmf test 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:17.227 10:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.227 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.228 rmmod nvme_tcp 00:34:17.228 rmmod nvme_fabrics 00:34:17.228 rmmod nvme_keyring 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.228 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:17.229 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:17.229 10:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.229 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.229 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.229 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.229 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.229 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.229 10:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.229 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:17.230 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.231 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.232 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.232 
10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.232 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.232 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.232 00:34:17.232 real 0m8.292s 00:34:17.232 user 0m1.751s 00:34:17.232 sys 0m4.550s 00:34:17.232 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.232 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:17.232 ************************************ 00:34:17.232 END TEST nvmf_target_multipath 00:34:17.232 ************************************ 00:34:17.232 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:17.232 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:17.233 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.233 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:17.233 ************************************ 00:34:17.233 START TEST nvmf_zcopy 00:34:17.233 ************************************ 00:34:17.233 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:17.233 * Looking for test storage... 
00:34:17.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.233 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:17.233 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:34:17.233 10:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:17.233 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.234 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:17.235 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.236 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:17.236 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.236 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.236 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.236 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:17.236 10:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.236 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:17.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.236 --rc genhtml_branch_coverage=1 00:34:17.236 --rc genhtml_function_coverage=1 00:34:17.236 --rc genhtml_legend=1 00:34:17.236 --rc geninfo_all_blocks=1 00:34:17.236 --rc geninfo_unexecuted_blocks=1 00:34:17.236 00:34:17.236 ' 00:34:17.236 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:17.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.237 --rc genhtml_branch_coverage=1 00:34:17.237 --rc genhtml_function_coverage=1 00:34:17.237 --rc genhtml_legend=1 00:34:17.237 --rc geninfo_all_blocks=1 00:34:17.237 --rc geninfo_unexecuted_blocks=1 00:34:17.237 00:34:17.237 ' 00:34:17.237 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:17.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.237 --rc genhtml_branch_coverage=1 00:34:17.237 --rc genhtml_function_coverage=1 00:34:17.237 --rc genhtml_legend=1 00:34:17.237 --rc geninfo_all_blocks=1 00:34:17.237 --rc geninfo_unexecuted_blocks=1 00:34:17.237 00:34:17.237 ' 00:34:17.237 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:17.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.237 --rc genhtml_branch_coverage=1 00:34:17.237 --rc genhtml_function_coverage=1 00:34:17.237 --rc genhtml_legend=1 00:34:17.237 --rc geninfo_all_blocks=1 00:34:17.237 --rc geninfo_unexecuted_blocks=1 00:34:17.237 00:34:17.237 ' 00:34:17.237 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.237 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.238 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.239 10:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.239 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.240 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.240 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.240 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.241 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.241 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.241 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:17.241 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.241 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:17.241 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.242 10:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.242 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.243 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.243 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.243 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.274 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.274 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.275 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.633 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.634 
10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.634 10:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:22.634 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:22.634 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:22.634 Found net devices under 0000:86:00.0: cvl_0_0 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:22.634 Found net devices under 0000:86:00.1: cvl_0_1 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.634 10:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:34:22.634 00:34:22.634 --- 10.0.0.2 ping statistics --- 00:34:22.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.634 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:22.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:34:22.634 00:34:22.634 --- 10.0.0.1 ping statistics --- 00:34:22.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.634 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:34:22.634 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2877197 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2877197 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2877197 ']' 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.635 10:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 [2024-12-09 10:45:00.026471] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:22.635 [2024-12-09 10:45:00.027464] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:22.635 [2024-12-09 10:45:00.027504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.635 [2024-12-09 10:45:00.111132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.635 [2024-12-09 10:45:00.151949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.635 [2024-12-09 10:45:00.151985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.635 [2024-12-09 10:45:00.151993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.635 [2024-12-09 10:45:00.151998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.635 [2024-12-09 10:45:00.152003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.635 [2024-12-09 10:45:00.152547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.635 [2024-12-09 10:45:00.221574] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:22.635 [2024-12-09 10:45:00.221777] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
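The trace above (nvmf/common.sh@250-291) builds the TCP test topology: the target-side interface is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in iptables, and reachability is verified with ping in both directions. A condensed sketch of that sequence, with interface names and addresses taken from the log; it needs root and the cvl_0_0/cvl_0_1 net devices, so it is illustrative only:

```shell
# Sketch of the nvmf_tcp_init steps visible in the trace (nvmf/common.sh@267-291).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Isolate the target interface in its own network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the default namespace; target gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

With this in place, the nvmf_tgt process is launched under `ip netns exec cvl_0_0_ns_spdk` (nvmf/common.sh@508 in the trace) so that it listens on the namespaced side of the link.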
00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 [2024-12-09 10:45:00.289279] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 
10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 [2024-12-09 10:45:00.317523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.635 malloc0 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.635 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:22.897 { 00:34:22.897 "params": { 00:34:22.897 "name": "Nvme$subsystem", 00:34:22.897 "trtype": "$TEST_TRANSPORT", 00:34:22.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.897 "adrfam": "ipv4", 00:34:22.897 "trsvcid": "$NVMF_PORT", 00:34:22.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.897 "hdgst": ${hdgst:-false}, 00:34:22.897 "ddgst": ${ddgst:-false} 00:34:22.897 }, 00:34:22.897 "method": "bdev_nvme_attach_controller" 00:34:22.897 } 00:34:22.897 EOF 00:34:22.897 )") 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:22.897 10:45:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:22.897 10:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:22.897 "params": { 00:34:22.897 "name": "Nvme1", 00:34:22.897 "trtype": "tcp", 00:34:22.897 "traddr": "10.0.0.2", 00:34:22.897 "adrfam": "ipv4", 00:34:22.897 "trsvcid": "4420", 00:34:22.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:22.897 "hdgst": false, 00:34:22.897 "ddgst": false 00:34:22.897 }, 00:34:22.897 "method": "bdev_nvme_attach_controller" 00:34:22.897 }' 00:34:22.897 [2024-12-09 10:45:00.410457] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:34:22.897 [2024-12-09 10:45:00.410500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877372 ] 00:34:22.897 [2024-12-09 10:45:00.484736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.897 [2024-12-09 10:45:00.525262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.158 Running I/O for 10 seconds... 
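The rpc_cmd calls in target/zcopy.sh@22-30 above provision the zero-copy target: a TCP transport with zcopy enabled, a subsystem, its data and discovery listeners, a malloc bdev, and a namespace. A sketch of the same sequence expressed as direct SPDK `rpc.py` invocations (the test actually goes through its `rpc_cmd` wrapper; parameters are taken from the trace, and this is a configuration fragment rather than a runnable script):

```shell
# TCP transport with zero-copy send enabled (-o also appears as a transport opt
# in the trace, -c 0 disables in-capsule data).
rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem: allow any host (-a), serial number, max 10 namespaces.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10

# Data and discovery listeners on the namespaced target address.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB RAM-backed bdev with 4096-byte blocks, exported as namespace 1.
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The bdevperf run that follows (target/zcopy.sh@33) then connects to this subsystem over 10.0.0.2:4420 from the default namespace.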
00:34:25.486 8615.00 IOPS, 67.30 MiB/s [2024-12-09T09:45:04.153Z] 8649.00 IOPS, 67.57 MiB/s [2024-12-09T09:45:05.096Z] 8657.67 IOPS, 67.64 MiB/s [2024-12-09T09:45:06.039Z] 8662.25 IOPS, 67.67 MiB/s [2024-12-09T09:45:06.981Z] 8665.00 IOPS, 67.70 MiB/s [2024-12-09T09:45:07.928Z] 8678.00 IOPS, 67.80 MiB/s [2024-12-09T09:45:08.871Z] 8676.43 IOPS, 67.78 MiB/s [2024-12-09T09:45:10.258Z] 8676.62 IOPS, 67.79 MiB/s [2024-12-09T09:45:11.202Z] 8676.33 IOPS, 67.78 MiB/s [2024-12-09T09:45:11.202Z] 8675.10 IOPS, 67.77 MiB/s 00:34:33.478 Latency(us) 00:34:33.478 [2024-12-09T09:45:11.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.478 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:33.478 Verification LBA range: start 0x0 length 0x1000 00:34:33.478 Nvme1n1 : 10.01 8676.18 67.78 0.00 0.00 14710.65 436.91 21346.01 00:34:33.478 [2024-12-09T09:45:11.202Z] =================================================================================================================== 00:34:33.478 [2024-12-09T09:45:11.202Z] Total : 8676.18 67.78 0.00 0.00 14710.65 436.91 21346.01 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2879551 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:33.478 10:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.478 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.478 { 00:34:33.478 "params": { 00:34:33.478 "name": "Nvme$subsystem", 00:34:33.478 "trtype": "$TEST_TRANSPORT", 00:34:33.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.478 "adrfam": "ipv4", 00:34:33.478 "trsvcid": "$NVMF_PORT", 00:34:33.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.479 "hdgst": ${hdgst:-false}, 00:34:33.479 "ddgst": ${ddgst:-false} 00:34:33.479 }, 00:34:33.479 "method": "bdev_nvme_attach_controller" 00:34:33.479 } 00:34:33.479 EOF 00:34:33.479 )") 00:34:33.479 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:33.479 [2024-12-09 10:45:11.040900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.040934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:34:33.479 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:33.479 10:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.479 "params": { 00:34:33.479 "name": "Nvme1", 00:34:33.479 "trtype": "tcp", 00:34:33.479 "traddr": "10.0.0.2", 00:34:33.479 "adrfam": "ipv4", 00:34:33.479 "trsvcid": "4420", 00:34:33.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.479 "hdgst": false, 00:34:33.479 "ddgst": false 00:34:33.479 }, 00:34:33.479 "method": "bdev_nvme_attach_controller" 00:34:33.479 }' 00:34:33.479 [2024-12-09 10:45:11.052861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.052875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.064857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.064868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.076855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.076865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.080542] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:33.479 [2024-12-09 10:45:11.080585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879551 ] 00:34:33.479 [2024-12-09 10:45:11.088856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.088868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.100852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.100863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.112854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.112865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.124853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.124863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.136851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.136861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.148852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.148862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.155428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.479 [2024-12-09 10:45:11.160857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:33.479 [2024-12-09 10:45:11.160868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.172860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.172883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.184852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.184861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.479 [2024-12-09 10:45:11.196540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.479 [2024-12-09 10:45:11.196857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.479 [2024-12-09 10:45:11.196870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.208877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.208892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.220862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.220880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.232860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.232875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.244855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.244869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.256859] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.256873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.268853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.268864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.280870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.280890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.292861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.292879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.304883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.304899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.316854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.316864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.328852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.328862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.340854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.340865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.352856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.352870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.364854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.364868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.376851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.376861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.388851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.388861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.400852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.400862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.412855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.412869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.424851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.424861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.436850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.436860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.448857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 
[2024-12-09 10:45:11.448873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.741 [2024-12-09 10:45:11.460853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.741 [2024-12-09 10:45:11.460863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.472850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.472860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.484851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.484861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.496852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.496864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.508857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.508874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 Running I/O for 5 seconds... 
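The bdevperf invocations in this trace receive their controller definition over `/dev/fd` from gen_nvmf_target_json, which assembles one `params` stanza per subsystem with a heredoc, defaulted digest flags, and a final `jq`/`printf` pass. A standalone re-creation of that pattern (the function name and defaults here are mine; the field values mirror the `printf '%s\n'` output visible in the log, and SPDK's real helper differs in detail):

```shell
# Hypothetical re-creation of the gen_nvmf_target_json heredoc pattern seen in
# the trace: substitute test variables into a JSON stanza, defaulting the
# digest options when hdgst/ddgst are unset.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_json() {
  local subsystem=${1:-1}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=$(gen_target_json 1)
echo "$config"
```

Feeding the result to bdevperf via process substitution (`--json /dev/fd/63` in the trace) avoids writing a temporary config file.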
00:34:34.003 [2024-12-09 10:45:11.522627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.522648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.537161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.537179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.552341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.552361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.567185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.567205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.582184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.582208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.596658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.596676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.609245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.609263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.625515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.625533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.003 [2024-12-09 10:45:11.640966] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.003 [2024-12-09 10:45:11.640992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2130 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace") repeats with new timestamps, roughly every 11-15 ms, from 2024-12-09 10:45:11.652337 onward; only the unique interleaved lines are kept below ...]
00:34:35.048 16806.00 IOPS, 131.30 MiB/s [2024-12-09T09:45:12.772Z]
00:34:35.836 16839.00 IOPS, 131.55 MiB/s [2024-12-09T09:45:13.560Z]
00:34:36.620 [2024-12-09 10:45:14.085573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.085592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:34:36.620 [2024-12-09 10:45:14.101034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.101053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.112143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.112162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.126697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.126716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.141739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.141757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.156713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.156732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.170503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.170521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.184983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.185003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.197870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.197888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.212621] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.212640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.226065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.226082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.240875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.240894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.254453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.254473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.269104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.269123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.280435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.280453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.294313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.294331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.308739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.308757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.321312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.321330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.620 [2024-12-09 10:45:14.334408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.620 [2024-12-09 10:45:14.334427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.349294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.349313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.361866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.361886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.376672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.376691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.388812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.388831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.402684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.402703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.417320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.417338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.429221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 
[2024-12-09 10:45:14.429239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.444356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.444375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.459205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.459224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.473578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.473597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.487794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.487819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.502410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.502430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.517107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.517126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 16852.00 IOPS, 131.66 MiB/s [2024-12-09T09:45:14.605Z] [2024-12-09 10:45:14.530092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.530110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.540992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 
[2024-12-09 10:45:14.541011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.554643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.554663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.569871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.569889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.584444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.584463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:36.881 [2024-12-09 10:45:14.598340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:36.881 [2024-12-09 10:45:14.598359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.613036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.613056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.625798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.625823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.640993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.641013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.653446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.653466] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.666552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.666571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.681235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.681255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.696711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.696731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.708400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.708418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.722545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.722564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.737010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.737031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.749640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.749660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.764826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.764845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:37.143 [2024-12-09 10:45:14.778399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.778418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.793243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.793261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.808876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.808895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.821566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.821584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.143 [2024-12-09 10:45:14.836969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.143 [2024-12-09 10:45:14.836989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.144 [2024-12-09 10:45:14.849732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.144 [2024-12-09 10:45:14.849756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.144 [2024-12-09 10:45:14.864402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.144 [2024-12-09 10:45:14.864422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.877087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.877106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.890832] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.890852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.905830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.905849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.920581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.920600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.932617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.932636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.946842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.946861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.961641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.961660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.976781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.976802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:14.988966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:14.988985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:15.002939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:37.404 [2024-12-09 10:45:15.002959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.404 [2024-12-09 10:45:15.017487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 [2024-12-09 10:45:15.017506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.405 [2024-12-09 10:45:15.032937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 [2024-12-09 10:45:15.032956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.405 [2024-12-09 10:45:15.045545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 [2024-12-09 10:45:15.045565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.405 [2024-12-09 10:45:15.058442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 [2024-12-09 10:45:15.058461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.405 [2024-12-09 10:45:15.072899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 [2024-12-09 10:45:15.072918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.405 [2024-12-09 10:45:15.085005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 [2024-12-09 10:45:15.085024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.405 [2024-12-09 10:45:15.098789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 [2024-12-09 10:45:15.098813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.405 [2024-12-09 10:45:15.113785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.405 
[2024-12-09 10:45:15.113814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.128815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.128833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.141843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.141861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.154033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.154051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.168990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.169009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.181872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.181890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.196727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.196745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.207705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.207723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.222285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.222303] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.236663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.236682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.250178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.250197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.264633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.264652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.277679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.277698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.292325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.292345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.305864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.305883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.320720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.320739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.334241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.334264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:37.665 [2024-12-09 10:45:15.349129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.665 [2024-12-09 10:45:15.349147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.665 [2024-12-09 10:45:15.362589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.666 [2024-12-09 10:45:15.362608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.666 [2024-12-09 10:45:15.377026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.666 [2024-12-09 10:45:15.377049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.666 [2024-12-09 10:45:15.387528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.666 [2024-12-09 10:45:15.387546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.403106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.403126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.417598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.417617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.429366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.429385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.442724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.442742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.457871] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.457890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.472547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.472565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.486607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.486627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.501272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.501290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.516495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.516513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 16861.00 IOPS, 131.73 MiB/s [2024-12-09T09:45:15.651Z] [2024-12-09 10:45:15.530316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.530335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.544519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.544538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.558801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.927 [2024-12-09 10:45:15.558826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.927 [2024-12-09 10:45:15.572904] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:37.927 [2024-12-09 10:45:15.572922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:38.977 [2024-12-09 10:45:16.486234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:38.977 [2024-12-09 10:45:16.486252]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:38.977 16871.40 IOPS, 131.81 MiB/s
00:34:38.977 Latency(us)
[2024-12-09T09:45:16.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:38.977 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:38.977 Nvme1n1 : 5.01 16874.11 131.83 0.00 0.00 7578.83 2153.33 12607.88
[2024-12-09T09:45:16.701Z] ===================================================================================================================
00:34:38.977 [2024-12-09T09:45:16.701Z] Total : 16874.11 131.83 0.00 0.00 7578.83 2153.33 12607.88
00:34:38.977 [2024-12-09 10:45:16.536860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:38.977 [2024-12-09 10:45:16.536877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:38.977 [2024-12-09 10:45:16.692852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:38.977 [2024-12-09 10:45:16.692863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:38.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2879551) - No such process
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2879551
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy
-- common/autotest_common.sh@10 -- # set +x
00:34:39.240 delay0
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:39.240 10:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:34:39.240 [2024-12-09 10:45:16.842751] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:47.384 Initializing NVMe Controllers
00:34:47.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:47.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:47.384 Initialization complete. Launching workers.
00:34:47.384 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 21374
00:34:47.384 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21547, failed to submit 93
00:34:47.384 success 21454, unsuccessful 93, failed 0
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:47.384 rmmod nvme_tcp
00:34:47.384 rmmod nvme_fabrics
00:34:47.384 rmmod nvme_keyring
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2877197 ']'
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2877197
00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
common/autotest_common.sh@954 -- # '[' -z 2877197 ']' 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2877197 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2877197 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2877197' 00:34:47.384 killing process with pid 2877197 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2877197 00:34:47.384 10:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2877197 00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:47.384 10:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:48.773
00:34:48.773 real 0m32.347s
00:34:48.773 user 0m41.868s
00:34:48.773 sys 0m13.037s
00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:48.773 ************************************
00:34:48.773 END TEST nvmf_zcopy
00:34:48.773 ************************************
00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:48.773
************************************ 00:34:48.773 START TEST nvmf_nmic 00:34:48.773 ************************************ 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:48.773 * Looking for test storage... 00:34:48.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.773 10:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.773 10:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:48.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.773 --rc genhtml_branch_coverage=1 00:34:48.773 --rc genhtml_function_coverage=1 00:34:48.773 --rc genhtml_legend=1 00:34:48.773 --rc geninfo_all_blocks=1 00:34:48.773 --rc geninfo_unexecuted_blocks=1 00:34:48.773 00:34:48.773 ' 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:48.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.773 --rc genhtml_branch_coverage=1 00:34:48.773 --rc genhtml_function_coverage=1 00:34:48.773 --rc genhtml_legend=1 00:34:48.773 --rc geninfo_all_blocks=1 00:34:48.773 --rc geninfo_unexecuted_blocks=1 00:34:48.773 00:34:48.773 ' 00:34:48.773 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:48.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.773 --rc genhtml_branch_coverage=1 00:34:48.773 --rc genhtml_function_coverage=1 00:34:48.773 --rc genhtml_legend=1 00:34:48.774 --rc geninfo_all_blocks=1 00:34:48.774 --rc geninfo_unexecuted_blocks=1 00:34:48.774 00:34:48.774 ' 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:48.774 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.774 --rc genhtml_branch_coverage=1 00:34:48.774 --rc genhtml_function_coverage=1 00:34:48.774 --rc genhtml_legend=1 00:34:48.774 --rc geninfo_all_blocks=1 00:34:48.774 --rc geninfo_unexecuted_blocks=1 00:34:48.774 00:34:48.774 ' 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.774 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:49.035 10:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.035 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.036 10:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:49.036 10:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.630 10:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:55.630 10:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.630 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:55.630 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:55.631 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.631 10:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:55.631 Found net devices under 0000:86:00.0: cvl_0_0 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.631 10:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:55.631 Found net devices under 0000:86:00.1: cvl_0_1 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:55.631 10:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:55.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:34:55.631 00:34:55.631 --- 10.0.0.2 ping statistics --- 00:34:55.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.631 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:55.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:34:55.631 00:34:55.631 --- 10.0.0.1 ping statistics --- 00:34:55.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.631 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2885044 
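The `nvmftestinit`/`nvmf_tcp_init` trace above moves one physical NIC port into a private network namespace so the target and initiator can talk over real hardware on a single host. A condensed sketch of those commands follows; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the `10.0.0.0/24` addresses are taken from this log, but the block needs root and matching NICs, so it is illustrative rather than a verbatim replay:

```shell
# Sketch of the namespace setup performed by nvmf/common.sh in the log above
# (illustrative; assumes two NIC ports named cvl_0_0 and cvl_0_1).
NS=cvl_0_0_ns_spdk

# Clear any stale addresses before reassigning
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target side lives inside the namespace; initiator stays in the root ns
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target application is later launched with `ip netns exec cvl_0_0_ns_spdk ...`, it binds its listener to 10.0.0.2 inside the namespace while `nvme connect` runs from the root namespace as the initiator.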
00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2885044 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2885044 ']' 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.631 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.631 [2024-12-09 10:45:32.478910] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:55.631 [2024-12-09 10:45:32.479788] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:34:55.631 [2024-12-09 10:45:32.479829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.632 [2024-12-09 10:45:32.558608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:55.632 [2024-12-09 10:45:32.599411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.632 [2024-12-09 10:45:32.599453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.632 [2024-12-09 10:45:32.599460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.632 [2024-12-09 10:45:32.599467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.632 [2024-12-09 10:45:32.599471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.632 [2024-12-09 10:45:32.600860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.632 [2024-12-09 10:45:32.600971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:55.632 [2024-12-09 10:45:32.601078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.632 [2024-12-09 10:45:32.601079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:55.632 [2024-12-09 10:45:32.670043] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:55.632 [2024-12-09 10:45:32.670284] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:55.632 [2024-12-09 10:45:32.670804] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:55.632 [2024-12-09 10:45:32.670923] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:55.632 [2024-12-09 10:45:32.671001] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 [2024-12-09 10:45:32.749763] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 Malloc0 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 [2024-12-09 10:45:32.837909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.632 10:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:55.632 test case1: single bdev can't be used in multiple subsystems 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 [2024-12-09 10:45:32.869456] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:55.632 [2024-12-09 10:45:32.869478] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:55.632 [2024-12-09 10:45:32.869486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.632 request: 00:34:55.632 { 00:34:55.632 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:55.632 "namespace": { 00:34:55.632 "bdev_name": "Malloc0", 00:34:55.632 "no_auto_visible": false, 00:34:55.632 "hide_metadata": false 00:34:55.632 }, 00:34:55.632 "method": "nvmf_subsystem_add_ns", 00:34:55.632 "req_id": 1 00:34:55.632 } 00:34:55.632 Got JSON-RPC error response 00:34:55.632 response: 00:34:55.632 { 00:34:55.632 "code": -32602, 00:34:55.632 "message": "Invalid parameters" 00:34:55.632 } 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:55.632 Adding namespace failed - expected result. 
00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:55.632 test case2: host connect to nvmf target in multiple paths 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.632 [2024-12-09 10:45:32.881565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.632 10:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:55.632 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:55.893 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:55.893 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:55.893 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:55.893 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:55.893 10:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:57.806 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:57.806 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:57.806 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:57.806 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:57.806 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:57.806 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:57.807 10:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:57.807 [global] 00:34:57.807 thread=1 00:34:57.807 invalidate=1 00:34:57.807 rw=write 00:34:57.807 time_based=1 00:34:57.807 runtime=1 00:34:57.807 ioengine=libaio 00:34:57.807 direct=1 00:34:57.807 bs=4096 00:34:57.807 iodepth=1 00:34:57.807 norandommap=0 00:34:57.807 numjobs=1 00:34:57.807 00:34:57.807 verify_dump=1 00:34:57.807 verify_backlog=512 00:34:57.807 verify_state_save=0 00:34:57.807 do_verify=1 00:34:57.807 verify=crc32c-intel 00:34:57.807 [job0] 00:34:57.807 filename=/dev/nvme0n1 00:34:57.807 Could not set queue depth (nvme0n1) 00:34:58.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.067 fio-3.35 00:34:58.067 Starting 1 thread 00:34:59.451 00:34:59.451 job0: (groupid=0, jobs=1): err= 0: pid=2885766: Mon Dec 9 
10:45:36 2024 00:34:59.451 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:34:59.451 slat (nsec): min=10154, max=24424, avg=22307.00, stdev=2785.06 00:34:59.451 clat (usec): min=40856, max=41986, avg=41024.19, stdev=227.40 00:34:59.451 lat (usec): min=40878, max=42009, avg=41046.50, stdev=226.64 00:34:59.451 clat percentiles (usec): 00:34:59.451 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:59.451 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:59.451 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:59.451 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:59.451 | 99.99th=[42206] 00:34:59.451 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:34:59.451 slat (nsec): min=9988, max=43929, avg=11199.88, stdev=2382.90 00:34:59.451 clat (usec): min=121, max=406, avg=135.57, stdev=15.89 00:34:59.451 lat (usec): min=131, max=417, avg=146.77, stdev=16.87 00:34:59.451 clat percentiles (usec): 00:34:59.451 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 130], 00:34:59.451 | 30.00th=[ 133], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:34:59.451 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 147], 00:34:59.451 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 408], 99.95th=[ 408], 00:34:59.451 | 99.99th=[ 408] 00:34:59.451 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:59.451 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:59.451 lat (usec) : 250=95.33%, 500=0.37% 00:34:59.451 lat (msec) : 50=4.30% 00:34:59.451 cpu : usr=0.39%, sys=0.88%, ctx=535, majf=0, minf=1 00:34:59.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.451 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.451 00:34:59.451 Run status group 0 (all jobs): 00:34:59.451 READ: bw=90.1KiB/s (92.3kB/s), 90.1KiB/s-90.1KiB/s (92.3kB/s-92.3kB/s), io=92.0KiB (94.2kB), run=1021-1021msec 00:34:59.451 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:34:59.451 00:34:59.451 Disk stats (read/write): 00:34:59.451 nvme0n1: ios=69/512, merge=0/0, ticks=840/61, in_queue=901, util=91.18% 00:34:59.452 10:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:59.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:59.452 10:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:59.452 rmmod nvme_tcp 00:34:59.452 rmmod nvme_fabrics 00:34:59.452 rmmod nvme_keyring 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2885044 ']' 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2885044 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2885044 ']' 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2885044 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.452 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2885044 
00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2885044' 00:34:59.712 killing process with pid 2885044 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2885044 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2885044 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.712 10:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:59.712 10:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:02.259 00:35:02.259 real 0m13.132s 00:35:02.259 user 0m24.420s 00:35:02.259 sys 0m6.024s 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:02.259 ************************************ 00:35:02.259 END TEST nvmf_nmic 00:35:02.259 ************************************ 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:02.259 ************************************ 00:35:02.259 START TEST nvmf_fio_target 00:35:02.259 ************************************ 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:02.259 * Looking for test storage... 
00:35:02.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:02.259 
10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.259 --rc genhtml_branch_coverage=1 00:35:02.259 --rc genhtml_function_coverage=1 00:35:02.259 --rc genhtml_legend=1 00:35:02.259 --rc geninfo_all_blocks=1 00:35:02.259 --rc geninfo_unexecuted_blocks=1 00:35:02.259 00:35:02.259 ' 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.259 --rc genhtml_branch_coverage=1 00:35:02.259 --rc genhtml_function_coverage=1 00:35:02.259 --rc genhtml_legend=1 00:35:02.259 --rc geninfo_all_blocks=1 00:35:02.259 --rc geninfo_unexecuted_blocks=1 00:35:02.259 00:35:02.259 ' 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.259 --rc genhtml_branch_coverage=1 00:35:02.259 --rc genhtml_function_coverage=1 00:35:02.259 --rc genhtml_legend=1 00:35:02.259 --rc geninfo_all_blocks=1 00:35:02.259 --rc geninfo_unexecuted_blocks=1 00:35:02.259 00:35:02.259 ' 00:35:02.259 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.259 --rc genhtml_branch_coverage=1 00:35:02.259 --rc genhtml_function_coverage=1 00:35:02.259 --rc genhtml_legend=1 00:35:02.259 --rc geninfo_all_blocks=1 
00:35:02.260 --rc geninfo_unexecuted_blocks=1 00:35:02.260 00:35:02.260 ' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:02.260 
10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.260 10:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.260 
10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:02.260 10:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:02.260 10:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.654 10:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:07.654 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:07.654 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.654 
10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:07.654 Found net 
devices under 0000:86:00.0: cvl_0_0 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:07.654 Found net devices under 0000:86:00.1: cvl_0_1 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:07.654 10:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.654 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.914 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.914 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.914 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.914 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:07.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:35:07.914 00:35:07.914 --- 10.0.0.2 ping statistics --- 00:35:07.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.915 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:07.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:35:07.915 00:35:07.915 --- 10.0.0.1 ping statistics --- 00:35:07.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.915 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:07.915 10:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2889426 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2889426 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2889426 ']' 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.915 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:07.915 [2024-12-09 10:45:45.541735] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:07.915 [2024-12-09 10:45:45.542659] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:35:07.915 [2024-12-09 10:45:45.542692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.915 [2024-12-09 10:45:45.623215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:08.191 [2024-12-09 10:45:45.666538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.192 [2024-12-09 10:45:45.666572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.192 [2024-12-09 10:45:45.666579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.192 [2024-12-09 10:45:45.666584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.192 [2024-12-09 10:45:45.666589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:08.192 [2024-12-09 10:45:45.668165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.192 [2024-12-09 10:45:45.668272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.192 [2024-12-09 10:45:45.668382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.192 [2024-12-09 10:45:45.668382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:08.192 [2024-12-09 10:45:45.737482] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:08.192 [2024-12-09 10:45:45.737769] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:08.192 [2024-12-09 10:45:45.738249] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:08.192 [2024-12-09 10:45:45.738420] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:08.192 [2024-12-09 10:45:45.738473] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:08.192 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.192 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:08.192 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:08.192 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:08.192 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:08.192 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.192 10:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:08.455 [2024-12-09 10:45:45.969065] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.455 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:08.715 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:08.715 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:35:08.975 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:08.975 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:08.975 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:08.975 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:09.235 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:09.235 10:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:09.495 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:09.755 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:09.755 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:09.755 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:09.755 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:10.016 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:35:10.016 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:10.277 10:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:10.537 10:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:10.537 10:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:10.799 10:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:10.799 10:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:10.799 10:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.058 [2024-12-09 10:45:48.644969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.058 10:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:11.318 10:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:11.577 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:11.577 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:11.578 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:11.578 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:11.578 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:11.578 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:11.578 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:14.116 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:14.116 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:14.116 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:14.116 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:14.116 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:14.116 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:35:14.116 10:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:14.116 [global] 00:35:14.116 thread=1 00:35:14.116 invalidate=1 00:35:14.116 rw=write 00:35:14.116 time_based=1 00:35:14.116 runtime=1 00:35:14.116 ioengine=libaio 00:35:14.116 direct=1 00:35:14.116 bs=4096 00:35:14.116 iodepth=1 00:35:14.116 norandommap=0 00:35:14.116 numjobs=1 00:35:14.116 00:35:14.116 verify_dump=1 00:35:14.116 verify_backlog=512 00:35:14.116 verify_state_save=0 00:35:14.116 do_verify=1 00:35:14.116 verify=crc32c-intel 00:35:14.116 [job0] 00:35:14.116 filename=/dev/nvme0n1 00:35:14.116 [job1] 00:35:14.116 filename=/dev/nvme0n2 00:35:14.116 [job2] 00:35:14.116 filename=/dev/nvme0n3 00:35:14.116 [job3] 00:35:14.116 filename=/dev/nvme0n4 00:35:14.116 Could not set queue depth (nvme0n1) 00:35:14.116 Could not set queue depth (nvme0n2) 00:35:14.116 Could not set queue depth (nvme0n3) 00:35:14.116 Could not set queue depth (nvme0n4) 00:35:14.116 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:14.116 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:14.116 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:14.116 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:14.116 fio-3.35 00:35:14.116 Starting 4 threads 00:35:15.495 00:35:15.495 job0: (groupid=0, jobs=1): err= 0: pid=2890695: Mon Dec 9 10:45:52 2024 00:35:15.495 read: IOPS=2374, BW=9499KiB/s (9726kB/s)(9508KiB/1001msec) 00:35:15.495 slat (nsec): min=7188, max=26224, avg=8235.41, stdev=1191.63 00:35:15.495 clat (usec): min=188, max=463, avg=223.79, stdev=24.15 00:35:15.495 lat (usec): min=195, max=489, 
avg=232.03, stdev=24.23 00:35:15.495 clat percentiles (usec): 00:35:15.495 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 208], 00:35:15.495 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:35:15.495 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 273], 95.00th=[ 277], 00:35:15.495 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 375], 00:35:15.495 | 99.99th=[ 465] 00:35:15.495 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:35:15.495 slat (nsec): min=10546, max=42866, avg=12056.60, stdev=2136.71 00:35:15.495 clat (usec): min=132, max=379, avg=157.35, stdev=14.20 00:35:15.495 lat (usec): min=143, max=417, avg=169.41, stdev=15.03 00:35:15.495 clat percentiles (usec): 00:35:15.495 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:35:15.495 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:35:15.495 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 184], 00:35:15.495 | 99.00th=[ 202], 99.50th=[ 223], 99.90th=[ 241], 99.95th=[ 260], 00:35:15.495 | 99.99th=[ 379] 00:35:15.495 bw ( KiB/s): min=11576, max=11576, per=47.20%, avg=11576.00, stdev= 0.00, samples=1 00:35:15.495 iops : min= 2894, max= 2894, avg=2894.00, stdev= 0.00, samples=1 00:35:15.495 lat (usec) : 250=92.12%, 500=7.88% 00:35:15.495 cpu : usr=4.30%, sys=7.80%, ctx=4938, majf=0, minf=1 00:35:15.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.495 issued rwts: total=2377,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.495 job1: (groupid=0, jobs=1): err= 0: pid=2890699: Mon Dec 9 10:45:52 2024 00:35:15.495 read: IOPS=2102, BW=8412KiB/s (8613kB/s)(8420KiB/1001msec) 00:35:15.495 slat (nsec): min=6799, max=24075, avg=7923.08, 
stdev=1244.32 00:35:15.495 clat (usec): min=191, max=425, avg=248.79, stdev=42.68 00:35:15.495 lat (usec): min=200, max=433, avg=256.72, stdev=42.79 00:35:15.495 clat percentiles (usec): 00:35:15.495 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:35:15.495 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 243], 60.00th=[ 255], 00:35:15.495 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 302], 95.00th=[ 310], 00:35:15.495 | 99.00th=[ 379], 99.50th=[ 416], 99.90th=[ 424], 99.95th=[ 424], 00:35:15.495 | 99.99th=[ 424] 00:35:15.495 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:35:15.495 slat (nsec): min=9888, max=43033, avg=11345.89, stdev=2174.74 00:35:15.495 clat (usec): min=132, max=318, avg=162.94, stdev=17.27 00:35:15.495 lat (usec): min=143, max=359, avg=174.28, stdev=17.99 00:35:15.495 clat percentiles (usec): 00:35:15.495 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:35:15.495 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:35:15.495 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 198], 00:35:15.495 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 281], 99.95th=[ 293], 00:35:15.495 | 99.99th=[ 318] 00:35:15.495 bw ( KiB/s): min= 8456, max= 8456, per=34.48%, avg=8456.00, stdev= 0.00, samples=1 00:35:15.495 iops : min= 2114, max= 2114, avg=2114.00, stdev= 0.00, samples=1 00:35:15.495 lat (usec) : 250=80.77%, 500=19.23% 00:35:15.495 cpu : usr=4.30%, sys=6.90%, ctx=4665, majf=0, minf=1 00:35:15.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.495 issued rwts: total=2105,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.495 job2: (groupid=0, jobs=1): err= 0: pid=2890700: Mon Dec 9 10:45:52 2024 
00:35:15.495 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:35:15.495 slat (nsec): min=10029, max=24605, avg=23109.77, stdev=2940.29 00:35:15.495 clat (usec): min=40862, max=42056, avg=41010.90, stdev=241.58 00:35:15.495 lat (usec): min=40872, max=42080, avg=41034.01, stdev=242.06 00:35:15.495 clat percentiles (usec): 00:35:15.495 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:15.495 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:15.495 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:15.495 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:15.495 | 99.99th=[42206] 00:35:15.495 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:35:15.495 slat (nsec): min=9922, max=48094, avg=11194.91, stdev=2534.01 00:35:15.495 clat (usec): min=147, max=405, avg=180.36, stdev=18.13 00:35:15.495 lat (usec): min=164, max=451, avg=191.55, stdev=19.06 00:35:15.495 clat percentiles (usec): 00:35:15.495 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:35:15.495 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:35:15.495 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 208], 00:35:15.495 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 408], 99.95th=[ 408], 00:35:15.495 | 99.99th=[ 408] 00:35:15.495 bw ( KiB/s): min= 4096, max= 4096, per=16.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:15.495 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:15.495 lat (usec) : 250=94.94%, 500=0.94% 00:35:15.495 lat (msec) : 50=4.12% 00:35:15.495 cpu : usr=0.20%, sys=0.70%, ctx=535, majf=0, minf=1 00:35:15.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.495 issued rwts: total=22,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:15.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.495 job3: (groupid=0, jobs=1): err= 0: pid=2890701: Mon Dec 9 10:45:52 2024 00:35:15.495 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:35:15.495 slat (nsec): min=9891, max=24589, avg=23152.59, stdev=2982.26 00:35:15.495 clat (usec): min=40867, max=42083, avg=41145.45, stdev=422.46 00:35:15.495 lat (usec): min=40891, max=42107, avg=41168.60, stdev=421.07 00:35:15.495 clat percentiles (usec): 00:35:15.495 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:15.495 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:15.496 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:15.496 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:15.496 | 99.99th=[42206] 00:35:15.496 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:35:15.496 slat (nsec): min=10090, max=39747, avg=11290.50, stdev=1696.31 00:35:15.496 clat (usec): min=145, max=283, avg=173.95, stdev=13.35 00:35:15.496 lat (usec): min=156, max=323, avg=185.24, stdev=13.86 00:35:15.496 clat percentiles (usec): 00:35:15.496 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:35:15.496 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:35:15.496 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:35:15.496 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 285], 99.95th=[ 285], 00:35:15.496 | 99.99th=[ 285] 00:35:15.496 bw ( KiB/s): min= 4096, max= 4096, per=16.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:15.496 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:15.496 lat (usec) : 250=95.69%, 500=0.19% 00:35:15.496 lat (msec) : 50=4.12% 00:35:15.496 cpu : usr=0.50%, sys=0.40%, ctx=534, majf=0, minf=1 00:35:15.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.496 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.496 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.496 00:35:15.496 Run status group 0 (all jobs): 00:35:15.496 READ: bw=17.6MiB/s (18.5MB/s), 87.8KiB/s-9499KiB/s (89.9kB/s-9726kB/s), io=17.7MiB (18.5MB), run=1001-1002msec 00:35:15.496 WRITE: bw=24.0MiB/s (25.1MB/s), 2044KiB/s-9.99MiB/s (2093kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1002msec 00:35:15.496 00:35:15.496 Disk stats (read/write): 00:35:15.496 nvme0n1: ios=2071/2132, merge=0/0, ticks=1309/325, in_queue=1634, util=85.87% 00:35:15.496 nvme0n2: ios=1895/2048, merge=0/0, ticks=488/319, in_queue=807, util=90.75% 00:35:15.496 nvme0n3: ios=41/512, merge=0/0, ticks=1646/89, in_queue=1735, util=93.54% 00:35:15.496 nvme0n4: ios=75/512, merge=0/0, ticks=818/87, in_queue=905, util=95.48% 00:35:15.496 10:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:15.496 [global] 00:35:15.496 thread=1 00:35:15.496 invalidate=1 00:35:15.496 rw=randwrite 00:35:15.496 time_based=1 00:35:15.496 runtime=1 00:35:15.496 ioengine=libaio 00:35:15.496 direct=1 00:35:15.496 bs=4096 00:35:15.496 iodepth=1 00:35:15.496 norandommap=0 00:35:15.496 numjobs=1 00:35:15.496 00:35:15.496 verify_dump=1 00:35:15.496 verify_backlog=512 00:35:15.496 verify_state_save=0 00:35:15.496 do_verify=1 00:35:15.496 verify=crc32c-intel 00:35:15.496 [job0] 00:35:15.496 filename=/dev/nvme0n1 00:35:15.496 [job1] 00:35:15.496 filename=/dev/nvme0n2 00:35:15.496 [job2] 00:35:15.496 filename=/dev/nvme0n3 00:35:15.496 [job3] 00:35:15.496 filename=/dev/nvme0n4 00:35:15.496 Could not set queue depth (nvme0n1) 00:35:15.496 Could not set queue depth (nvme0n2) 00:35:15.496 
Could not set queue depth (nvme0n3) 00:35:15.496 Could not set queue depth (nvme0n4) 00:35:15.496 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:15.496 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:15.496 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:15.496 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:15.496 fio-3.35 00:35:15.496 Starting 4 threads 00:35:16.876 00:35:16.876 job0: (groupid=0, jobs=1): err= 0: pid=2891070: Mon Dec 9 10:45:54 2024 00:35:16.876 read: IOPS=2127, BW=8511KiB/s (8716kB/s)(8520KiB/1001msec) 00:35:16.876 slat (nsec): min=6365, max=20765, avg=7434.41, stdev=968.74 00:35:16.876 clat (usec): min=177, max=495, avg=239.95, stdev=19.77 00:35:16.876 lat (usec): min=183, max=503, avg=247.38, stdev=19.88 00:35:16.876 clat percentiles (usec): 00:35:16.876 | 1.00th=[ 192], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 229], 00:35:16.876 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:35:16.876 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 262], 00:35:16.876 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 437], 99.95th=[ 461], 00:35:16.876 | 99.99th=[ 498] 00:35:16.876 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:35:16.876 slat (nsec): min=9049, max=39657, avg=10073.26, stdev=1142.72 00:35:16.876 clat (usec): min=124, max=570, avg=170.90, stdev=32.62 00:35:16.876 lat (usec): min=134, max=580, avg=180.98, stdev=32.73 00:35:16.876 clat percentiles (usec): 00:35:16.876 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 151], 00:35:16.876 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:35:16.876 | 70.00th=[ 174], 80.00th=[ 188], 90.00th=[ 223], 95.00th=[ 233], 00:35:16.876 | 99.00th=[ 255], 99.50th=[ 
277], 99.90th=[ 441], 99.95th=[ 482], 00:35:16.876 | 99.99th=[ 570] 00:35:16.876 bw ( KiB/s): min=10530, max=10530, per=44.43%, avg=10530.00, stdev= 0.00, samples=1 00:35:16.876 iops : min= 2632, max= 2632, avg=2632.00, stdev= 0.00, samples=1 00:35:16.876 lat (usec) : 250=89.64%, 500=10.34%, 750=0.02% 00:35:16.876 cpu : usr=2.20%, sys=4.50%, ctx=4690, majf=0, minf=1 00:35:16.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.876 issued rwts: total=2130,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:16.876 job1: (groupid=0, jobs=1): err= 0: pid=2891074: Mon Dec 9 10:45:54 2024 00:35:16.876 read: IOPS=1504, BW=6019KiB/s (6164kB/s)(6188KiB/1028msec) 00:35:16.876 slat (nsec): min=6556, max=23878, avg=7698.30, stdev=1289.73 00:35:16.876 clat (usec): min=173, max=41257, avg=407.81, stdev=2734.20 00:35:16.876 lat (usec): min=181, max=41265, avg=415.51, stdev=2734.20 00:35:16.876 clat percentiles (usec): 00:35:16.876 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:35:16.876 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:35:16.876 | 70.00th=[ 227], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 281], 00:35:16.876 | 99.00th=[ 330], 99.50th=[ 424], 99.90th=[41157], 99.95th=[41157], 00:35:16.876 | 99.99th=[41157] 00:35:16.876 write: IOPS=1992, BW=7969KiB/s (8160kB/s)(8192KiB/1028msec); 0 zone resets 00:35:16.876 slat (nsec): min=9277, max=37639, avg=10537.32, stdev=1715.76 00:35:16.876 clat (usec): min=120, max=528, avg=172.76, stdev=46.73 00:35:16.876 lat (usec): min=130, max=538, avg=183.30, stdev=46.66 00:35:16.876 clat percentiles (usec): 00:35:16.876 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:35:16.876 | 30.00th=[ 147], 40.00th=[ 149], 
50.00th=[ 153], 60.00th=[ 157], 00:35:16.876 | 70.00th=[ 167], 80.00th=[ 212], 90.00th=[ 247], 95.00th=[ 265], 00:35:16.876 | 99.00th=[ 310], 99.50th=[ 367], 99.90th=[ 494], 99.95th=[ 510], 00:35:16.876 | 99.99th=[ 529] 00:35:16.876 bw ( KiB/s): min= 4255, max=12120, per=34.55%, avg=8187.50, stdev=5561.39, samples=2 00:35:16.876 iops : min= 1063, max= 3030, avg=2046.50, stdev=1390.88, samples=2 00:35:16.876 lat (usec) : 250=88.68%, 500=11.07%, 750=0.06% 00:35:16.876 lat (msec) : 50=0.19% 00:35:16.876 cpu : usr=2.04%, sys=4.87%, ctx=3595, majf=0, minf=1 00:35:16.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.876 issued rwts: total=1547,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:16.876 job2: (groupid=0, jobs=1): err= 0: pid=2891075: Mon Dec 9 10:45:54 2024 00:35:16.876 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:35:16.876 slat (nsec): min=8333, max=20458, avg=10742.30, stdev=2714.24 00:35:16.876 clat (usec): min=338, max=42395, avg=39843.57, stdev=8626.02 00:35:16.876 lat (usec): min=347, max=42404, avg=39854.31, stdev=8626.22 00:35:16.876 clat percentiles (usec): 00:35:16.876 | 1.00th=[ 338], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:16.876 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:35:16.876 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:16.876 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:16.876 | 99.99th=[42206] 00:35:16.876 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:35:16.876 slat (nsec): min=10203, max=36059, avg=11947.24, stdev=1956.15 00:35:16.876 clat (usec): min=152, max=314, avg=219.20, stdev=23.49 00:35:16.876 lat (usec): 
min=163, max=350, avg=231.15, stdev=23.91 00:35:16.876 clat percentiles (usec): 00:35:16.876 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 204], 00:35:16.876 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:35:16.876 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 255], 00:35:16.876 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 314], 00:35:16.876 | 99.99th=[ 314] 00:35:16.876 bw ( KiB/s): min= 4087, max= 4087, per=17.25%, avg=4087.00, stdev= 0.00, samples=1 00:35:16.876 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:16.876 lat (usec) : 250=89.72%, 500=6.17% 00:35:16.876 lat (msec) : 50=4.11% 00:35:16.876 cpu : usr=0.29%, sys=0.97%, ctx=535, majf=0, minf=1 00:35:16.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.876 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:16.876 job3: (groupid=0, jobs=1): err= 0: pid=2891076: Mon Dec 9 10:45:54 2024 00:35:16.877 read: IOPS=683, BW=2734KiB/s (2800kB/s)(2808KiB/1027msec) 00:35:16.877 slat (nsec): min=7423, max=36935, avg=8800.29, stdev=2660.03 00:35:16.877 clat (usec): min=199, max=42233, avg=1169.26, stdev=6172.22 00:35:16.877 lat (usec): min=207, max=42241, avg=1178.06, stdev=6172.76 00:35:16.877 clat percentiles (usec): 00:35:16.877 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:35:16.877 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:35:16.877 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 253], 00:35:16.877 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:16.877 | 99.99th=[42206] 00:35:16.877 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 
00:35:16.877 slat (nsec): min=10602, max=41322, avg=11992.14, stdev=1971.73 00:35:16.877 clat (usec): min=139, max=327, avg=177.54, stdev=25.80 00:35:16.877 lat (usec): min=149, max=364, avg=189.53, stdev=26.24 00:35:16.877 clat percentiles (usec): 00:35:16.877 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:35:16.877 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 178], 00:35:16.877 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 227], 95.00th=[ 237], 00:35:16.877 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 326], 00:35:16.877 | 99.99th=[ 326] 00:35:16.877 bw ( KiB/s): min= 8175, max= 8175, per=34.49%, avg=8175.00, stdev= 0.00, samples=1 00:35:16.877 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:35:16.877 lat (usec) : 250=95.89%, 500=3.19% 00:35:16.877 lat (msec) : 50=0.93% 00:35:16.877 cpu : usr=2.14%, sys=2.05%, ctx=1727, majf=0, minf=1 00:35:16.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.877 issued rwts: total=702,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:16.877 00:35:16.877 Run status group 0 (all jobs): 00:35:16.877 READ: bw=16.6MiB/s (17.4MB/s), 88.7KiB/s-8511KiB/s (90.8kB/s-8716kB/s), io=17.2MiB (18.0MB), run=1001-1037msec 00:35:16.877 WRITE: bw=23.1MiB/s (24.3MB/s), 1975KiB/s-9.99MiB/s (2022kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1037msec 00:35:16.877 00:35:16.877 Disk stats (read/write): 00:35:16.877 nvme0n1: ios=1951/2048, merge=0/0, ticks=462/346, in_queue=808, util=87.47% 00:35:16.877 nvme0n2: ios=1381/1536, merge=0/0, ticks=788/259, in_queue=1047, util=95.13% 00:35:16.877 nvme0n3: ios=75/512, merge=0/0, ticks=758/108, in_queue=866, util=94.69% 00:35:16.877 nvme0n4: ios=721/1024, merge=0/0, 
ticks=1249/174, in_queue=1423, util=96.54% 00:35:16.877 10:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:16.877 [global] 00:35:16.877 thread=1 00:35:16.877 invalidate=1 00:35:16.877 rw=write 00:35:16.877 time_based=1 00:35:16.877 runtime=1 00:35:16.877 ioengine=libaio 00:35:16.877 direct=1 00:35:16.877 bs=4096 00:35:16.877 iodepth=128 00:35:16.877 norandommap=0 00:35:16.877 numjobs=1 00:35:16.877 00:35:16.877 verify_dump=1 00:35:16.877 verify_backlog=512 00:35:16.877 verify_state_save=0 00:35:16.877 do_verify=1 00:35:16.877 verify=crc32c-intel 00:35:16.877 [job0] 00:35:16.877 filename=/dev/nvme0n1 00:35:16.877 [job1] 00:35:16.877 filename=/dev/nvme0n2 00:35:16.877 [job2] 00:35:16.877 filename=/dev/nvme0n3 00:35:16.877 [job3] 00:35:16.877 filename=/dev/nvme0n4 00:35:16.877 Could not set queue depth (nvme0n1) 00:35:16.877 Could not set queue depth (nvme0n2) 00:35:16.877 Could not set queue depth (nvme0n3) 00:35:16.877 Could not set queue depth (nvme0n4) 00:35:17.137 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:17.137 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:17.137 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:17.137 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:17.137 fio-3.35 00:35:17.137 Starting 4 threads 00:35:18.518 00:35:18.518 job0: (groupid=0, jobs=1): err= 0: pid=2891443: Mon Dec 9 10:45:56 2024 00:35:18.518 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:35:18.518 slat (nsec): min=1180, max=10759k, avg=71457.08, stdev=596182.41 00:35:18.518 clat (usec): min=811, max=43406, avg=11642.76, stdev=4312.81 00:35:18.518 lat 
(usec): min=818, max=43408, avg=11714.22, stdev=4351.15 00:35:18.518 clat percentiles (usec): 00:35:18.518 | 1.00th=[ 3654], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 8848], 00:35:18.518 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10945], 60.00th=[11994], 00:35:18.518 | 70.00th=[12649], 80.00th=[14222], 90.00th=[15664], 95.00th=[20055], 00:35:18.518 | 99.00th=[26870], 99.50th=[32637], 99.90th=[34341], 99.95th=[34341], 00:35:18.519 | 99.99th=[43254] 00:35:18.519 write: IOPS=5047, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1002msec); 0 zone resets 00:35:18.519 slat (nsec): min=1981, max=13797k, avg=93379.71, stdev=659147.63 00:35:18.519 clat (usec): min=543, max=98948, avg=14192.70, stdev=13514.26 00:35:18.519 lat (usec): min=553, max=98952, avg=14286.08, stdev=13586.68 00:35:18.519 clat percentiles (usec): 00:35:18.519 | 1.00th=[ 2180], 5.00th=[ 5211], 10.00th=[ 6390], 20.00th=[ 7832], 00:35:18.519 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:35:18.519 | 70.00th=[11731], 80.00th=[14353], 90.00th=[21627], 95.00th=[41157], 00:35:18.519 | 99.00th=[81265], 99.50th=[89654], 99.90th=[98042], 99.95th=[98042], 00:35:18.519 | 99.99th=[99091] 00:35:18.519 bw ( KiB/s): min=18784, max=20664, per=25.99%, avg=19724.00, stdev=1329.36, samples=2 00:35:18.519 iops : min= 4696, max= 5166, avg=4931.00, stdev=332.34, samples=2 00:35:18.519 lat (usec) : 750=0.02%, 1000=0.10% 00:35:18.519 lat (msec) : 2=0.30%, 4=2.06%, 10=33.69%, 20=53.70%, 50=8.05% 00:35:18.519 lat (msec) : 100=2.08% 00:35:18.519 cpu : usr=3.70%, sys=5.29%, ctx=390, majf=0, minf=1 00:35:18.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:18.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:18.519 issued rwts: total=4608,5058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:18.519 job1: 
(groupid=0, jobs=1): err= 0: pid=2891444: Mon Dec 9 10:45:56 2024 00:35:18.519 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:35:18.519 slat (nsec): min=1634, max=11562k, avg=80906.06, stdev=598819.08 00:35:18.519 clat (usec): min=1053, max=24574, avg=10999.06, stdev=2854.71 00:35:18.519 lat (usec): min=1106, max=24598, avg=11079.96, stdev=2883.26 00:35:18.519 clat percentiles (usec): 00:35:18.519 | 1.00th=[ 4817], 5.00th=[ 6783], 10.00th=[ 8094], 20.00th=[ 8979], 00:35:18.519 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11338], 00:35:18.519 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14353], 95.00th=[15926], 00:35:18.519 | 99.00th=[19792], 99.50th=[21365], 99.90th=[22938], 99.95th=[22938], 00:35:18.519 | 99.99th=[24511] 00:35:18.519 write: IOPS=5611, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:35:18.519 slat (usec): min=2, max=40969, avg=87.38, stdev=790.14 00:35:18.519 clat (usec): min=495, max=53580, avg=11305.92, stdev=6097.34 00:35:18.519 lat (usec): min=1161, max=62727, avg=11393.30, stdev=6155.11 00:35:18.519 clat percentiles (usec): 00:35:18.519 | 1.00th=[ 1860], 5.00th=[ 5014], 10.00th=[ 6456], 20.00th=[ 8094], 00:35:18.519 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10552], 60.00th=[11469], 00:35:18.519 | 70.00th=[11731], 80.00th=[12256], 90.00th=[15008], 95.00th=[19792], 00:35:18.519 | 99.00th=[46400], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:35:18.519 | 99.99th=[53740] 00:35:18.519 bw ( KiB/s): min=20560, max=23408, per=28.97%, avg=21984.00, stdev=2013.84, samples=2 00:35:18.519 iops : min= 5140, max= 5852, avg=5496.00, stdev=503.46, samples=2 00:35:18.519 lat (usec) : 500=0.01% 00:35:18.519 lat (msec) : 2=0.54%, 4=1.14%, 10=37.16%, 20=58.23%, 50=2.55% 00:35:18.519 lat (msec) : 100=0.37% 00:35:18.519 cpu : usr=4.70%, sys=7.19%, ctx=391, majf=0, minf=1 00:35:18.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:18.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:18.519 issued rwts: total=5120,5623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:18.519 job2: (groupid=0, jobs=1): err= 0: pid=2891445: Mon Dec 9 10:45:56 2024 00:35:18.519 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:35:18.519 slat (nsec): min=1171, max=8881.3k, avg=100672.28, stdev=681083.84 00:35:18.519 clat (usec): min=793, max=26004, avg=12918.25, stdev=3979.61 00:35:18.519 lat (usec): min=802, max=26012, avg=13018.92, stdev=4028.41 00:35:18.519 clat percentiles (usec): 00:35:18.519 | 1.00th=[ 1582], 5.00th=[ 4359], 10.00th=[ 8160], 20.00th=[10290], 00:35:18.519 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 00:35:18.519 | 70.00th=[13960], 80.00th=[15795], 90.00th=[17695], 95.00th=[19530], 00:35:18.519 | 99.00th=[24249], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:35:18.519 | 99.99th=[26084] 00:35:18.519 write: IOPS=4226, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec); 0 zone resets 00:35:18.519 slat (usec): min=2, max=14976, avg=117.78, stdev=705.23 00:35:18.519 clat (usec): min=265, max=84749, avg=17528.20, stdev=15433.43 00:35:18.519 lat (usec): min=313, max=84757, avg=17645.98, stdev=15518.70 00:35:18.519 clat percentiles (usec): 00:35:18.519 | 1.00th=[ 1090], 5.00th=[ 2442], 10.00th=[ 3523], 20.00th=[ 6521], 00:35:18.519 | 30.00th=[ 8848], 40.00th=[12387], 50.00th=[13435], 60.00th=[13698], 00:35:18.519 | 70.00th=[17957], 80.00th=[24511], 90.00th=[38011], 95.00th=[51119], 00:35:18.519 | 99.00th=[76022], 99.50th=[77071], 99.90th=[84411], 99.95th=[84411], 00:35:18.519 | 99.99th=[84411] 00:35:18.519 bw ( KiB/s): min=18800, max=18800, per=24.77%, avg=18800.00, stdev= 0.00, samples=1 00:35:18.519 iops : min= 4700, max= 4700, avg=4700.00, stdev= 0.00, samples=1 00:35:18.519 lat (usec) : 500=0.07%, 750=0.04%, 1000=0.56% 00:35:18.519 
lat (msec) : 2=2.16%, 4=4.56%, 10=18.77%, 20=57.81%, 50=13.23% 00:35:18.519 lat (msec) : 100=2.80% 00:35:18.519 cpu : usr=3.70%, sys=4.90%, ctx=414, majf=0, minf=1 00:35:18.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:18.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:18.519 issued rwts: total=4096,4235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:18.519 job3: (groupid=0, jobs=1): err= 0: pid=2891446: Mon Dec 9 10:45:56 2024 00:35:18.519 read: IOPS=3683, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1002msec) 00:35:18.519 slat (nsec): min=1851, max=14870k, avg=117704.02, stdev=750125.66 00:35:18.519 clat (usec): min=590, max=43676, avg=14661.72, stdev=4299.02 00:35:18.519 lat (usec): min=4753, max=43683, avg=14779.42, stdev=4369.92 00:35:18.519 clat percentiles (usec): 00:35:18.519 | 1.00th=[ 5211], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11338], 00:35:18.519 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[14746], 00:35:18.519 | 70.00th=[16188], 80.00th=[17171], 90.00th=[20579], 95.00th=[21890], 00:35:18.519 | 99.00th=[31327], 99.50th=[31851], 99.90th=[38536], 99.95th=[43779], 00:35:18.519 | 99.99th=[43779] 00:35:18.519 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:35:18.519 slat (usec): min=2, max=56817, avg=127.45, stdev=1191.35 00:35:18.519 clat (usec): min=604, max=80917, avg=15525.42, stdev=11760.40 00:35:18.519 lat (usec): min=612, max=80939, avg=15652.87, stdev=11861.03 00:35:18.519 clat percentiles (usec): 00:35:18.519 | 1.00th=[ 2409], 5.00th=[ 5473], 10.00th=[ 8291], 20.00th=[10421], 00:35:18.519 | 30.00th=[11207], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173], 00:35:18.519 | 70.00th=[13435], 80.00th=[16188], 90.00th=[23987], 95.00th=[35390], 00:35:18.519 | 99.00th=[77071], 99.50th=[81265], 
99.90th=[81265], 99.95th=[81265], 00:35:18.519 | 99.99th=[81265] 00:35:18.519 bw ( KiB/s): min=13336, max=19432, per=21.59%, avg=16384.00, stdev=4310.52, samples=2 00:35:18.519 iops : min= 3334, max= 4858, avg=4096.00, stdev=1077.63, samples=2 00:35:18.519 lat (usec) : 750=0.09% 00:35:18.519 lat (msec) : 2=0.33%, 4=0.91%, 10=9.44%, 20=76.83%, 50=10.77% 00:35:18.519 lat (msec) : 100=1.62% 00:35:18.519 cpu : usr=3.20%, sys=5.09%, ctx=308, majf=0, minf=1 00:35:18.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:18.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:18.519 issued rwts: total=3691,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:18.519 00:35:18.519 Run status group 0 (all jobs): 00:35:18.519 READ: bw=68.3MiB/s (71.6MB/s), 14.4MiB/s-20.0MiB/s (15.1MB/s-20.9MB/s), io=68.4MiB (71.7MB), run=1002-1002msec 00:35:18.519 WRITE: bw=74.1MiB/s (77.7MB/s), 16.0MiB/s-21.9MiB/s (16.7MB/s-23.0MB/s), io=74.3MiB (77.9MB), run=1002-1002msec 00:35:18.519 00:35:18.519 Disk stats (read/write): 00:35:18.519 nvme0n1: ios=3604/3912, merge=0/0, ticks=38315/55933, in_queue=94248, util=87.37% 00:35:18.519 nvme0n2: ios=4149/4541, merge=0/0, ticks=28307/33360, in_queue=61667, util=90.43% 00:35:18.519 nvme0n3: ios=3129/3423, merge=0/0, ticks=27875/51731, in_queue=79606, util=93.70% 00:35:18.519 nvme0n4: ios=2833/3072, merge=0/0, ticks=26017/24195, in_queue=50212, util=99.56% 00:35:18.519 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:18.519 [global] 00:35:18.519 thread=1 00:35:18.519 invalidate=1 00:35:18.519 rw=randwrite 00:35:18.519 time_based=1 00:35:18.519 runtime=1 00:35:18.519 ioengine=libaio 
00:35:18.519 direct=1 00:35:18.519 bs=4096 00:35:18.519 iodepth=128 00:35:18.519 norandommap=0 00:35:18.519 numjobs=1 00:35:18.519 00:35:18.519 verify_dump=1 00:35:18.519 verify_backlog=512 00:35:18.519 verify_state_save=0 00:35:18.519 do_verify=1 00:35:18.519 verify=crc32c-intel 00:35:18.519 [job0] 00:35:18.519 filename=/dev/nvme0n1 00:35:18.519 [job1] 00:35:18.519 filename=/dev/nvme0n2 00:35:18.519 [job2] 00:35:18.519 filename=/dev/nvme0n3 00:35:18.519 [job3] 00:35:18.519 filename=/dev/nvme0n4 00:35:18.519 Could not set queue depth (nvme0n1) 00:35:18.519 Could not set queue depth (nvme0n2) 00:35:18.520 Could not set queue depth (nvme0n3) 00:35:18.520 Could not set queue depth (nvme0n4) 00:35:18.779 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:18.779 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:18.779 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:18.779 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:18.779 fio-3.35 00:35:18.779 Starting 4 threads 00:35:20.163 00:35:20.163 job0: (groupid=0, jobs=1): err= 0: pid=2891817: Mon Dec 9 10:45:57 2024 00:35:20.163 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:35:20.163 slat (nsec): min=1330, max=11869k, avg=99049.56, stdev=680428.73 00:35:20.163 clat (usec): min=5605, max=70587, avg=13904.38, stdev=8649.85 00:35:20.163 lat (usec): min=5610, max=70595, avg=14003.43, stdev=8690.19 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[ 6259], 5.00th=[ 6980], 10.00th=[ 8586], 20.00th=[10159], 00:35:20.163 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:35:20.163 | 70.00th=[13304], 80.00th=[14091], 90.00th=[17957], 95.00th=[27657], 00:35:20.163 | 99.00th=[57934], 99.50th=[60556], 99.90th=[70779], 
99.95th=[70779], 00:35:20.163 | 99.99th=[70779] 00:35:20.163 write: IOPS=4479, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1005msec); 0 zone resets 00:35:20.163 slat (nsec): min=1985, max=24895k, avg=126606.59, stdev=906384.43 00:35:20.163 clat (usec): min=1399, max=63915, avg=15669.11, stdev=10295.24 00:35:20.163 lat (usec): min=1503, max=68967, avg=15795.72, stdev=10407.44 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[ 5211], 5.00th=[ 6718], 10.00th=[ 8586], 20.00th=[ 9503], 00:35:20.163 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10814], 60.00th=[11863], 00:35:20.163 | 70.00th=[14222], 80.00th=[20055], 90.00th=[34866], 95.00th=[39060], 00:35:20.163 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[55837], 00:35:20.163 | 99.99th=[63701] 00:35:20.163 bw ( KiB/s): min=14512, max=20480, per=23.91%, avg=17496.00, stdev=4220.01, samples=2 00:35:20.163 iops : min= 3628, max= 5120, avg=4374.00, stdev=1055.00, samples=2 00:35:20.163 lat (msec) : 2=0.07%, 10=24.59%, 20=61.05%, 50=12.61%, 100=1.69% 00:35:20.163 cpu : usr=3.49%, sys=5.28%, ctx=356, majf=0, minf=1 00:35:20.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:20.163 issued rwts: total=4096,4502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:20.163 job1: (groupid=0, jobs=1): err= 0: pid=2891818: Mon Dec 9 10:45:57 2024 00:35:20.163 read: IOPS=4114, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1007msec) 00:35:20.163 slat (nsec): min=1260, max=11672k, avg=96970.16, stdev=624240.64 00:35:20.163 clat (usec): min=2673, max=29986, avg=12650.87, stdev=3454.14 00:35:20.163 lat (usec): min=2680, max=29994, avg=12747.84, stdev=3485.06 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10028], 
00:35:20.163 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:35:20.163 | 70.00th=[13042], 80.00th=[14484], 90.00th=[17171], 95.00th=[20055], 00:35:20.163 | 99.00th=[23462], 99.50th=[24511], 99.90th=[30016], 99.95th=[30016], 00:35:20.163 | 99.99th=[30016] 00:35:20.163 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:35:20.163 slat (nsec): min=1820, max=10232k, avg=119788.34, stdev=630532.35 00:35:20.163 clat (usec): min=2985, max=62312, avg=16238.59, stdev=9775.22 00:35:20.163 lat (usec): min=2992, max=62321, avg=16358.38, stdev=9839.06 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[ 4686], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[10028], 00:35:20.163 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11863], 60.00th=[12649], 00:35:20.163 | 70.00th=[17957], 80.00th=[22938], 90.00th=[31589], 95.00th=[39060], 00:35:20.163 | 99.00th=[48497], 99.50th=[52167], 99.90th=[62129], 99.95th=[62129], 00:35:20.163 | 99.99th=[62129] 00:35:20.163 bw ( KiB/s): min=17528, max=18696, per=24.75%, avg=18112.00, stdev=825.90, samples=2 00:35:20.163 iops : min= 4382, max= 4674, avg=4528.00, stdev=206.48, samples=2 00:35:20.163 lat (msec) : 4=0.29%, 10=19.23%, 20=65.36%, 50=14.73%, 100=0.39% 00:35:20.163 cpu : usr=3.28%, sys=4.37%, ctx=456, majf=0, minf=1 00:35:20.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:20.163 issued rwts: total=4143,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:20.163 job2: (groupid=0, jobs=1): err= 0: pid=2891819: Mon Dec 9 10:45:57 2024 00:35:20.163 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:35:20.163 slat (nsec): min=1531, max=12325k, avg=105894.23, stdev=830221.88 00:35:20.163 clat (usec): min=1029, 
max=53775, avg=12924.27, stdev=4326.95 00:35:20.163 lat (usec): min=1128, max=53783, avg=13030.16, stdev=4413.24 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[ 3490], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10552], 00:35:20.163 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12256], 60.00th=[12911], 00:35:20.163 | 70.00th=[13435], 80.00th=[14353], 90.00th=[16909], 95.00th=[20055], 00:35:20.163 | 99.00th=[28705], 99.50th=[40109], 99.90th=[51119], 99.95th=[51119], 00:35:20.163 | 99.99th=[53740] 00:35:20.163 write: IOPS=4528, BW=17.7MiB/s (18.5MB/s)(17.9MiB/1010msec); 0 zone resets 00:35:20.163 slat (usec): min=2, max=10792, avg=115.27, stdev=730.20 00:35:20.163 clat (usec): min=1595, max=121409, avg=16424.10, stdev=17583.64 00:35:20.163 lat (usec): min=1609, max=121418, avg=16539.37, stdev=17699.30 00:35:20.163 clat percentiles (msec): 00:35:20.163 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:35:20.163 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:35:20.163 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 27], 95.00th=[ 50], 00:35:20.163 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 122], 99.95th=[ 122], 00:35:20.163 | 99.99th=[ 122] 00:35:20.163 bw ( KiB/s): min=16384, max=19192, per=24.31%, avg=17788.00, stdev=1985.56, samples=2 00:35:20.163 iops : min= 4096, max= 4798, avg=4447.00, stdev=496.39, samples=2 00:35:20.164 lat (msec) : 2=0.45%, 4=0.90%, 10=17.98%, 20=71.12%, 50=6.87% 00:35:20.164 lat (msec) : 100=1.70%, 250=0.98% 00:35:20.164 cpu : usr=3.67%, sys=5.85%, ctx=372, majf=0, minf=1 00:35:20.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:20.164 issued rwts: total=4096,4574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:20.164 job3: (groupid=0, 
jobs=1): err= 0: pid=2891820: Mon Dec 9 10:45:57 2024 00:35:20.164 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:35:20.164 slat (nsec): min=1532, max=10849k, avg=102934.69, stdev=583591.29 00:35:20.164 clat (usec): min=8222, max=35516, avg=13470.45, stdev=3173.12 00:35:20.164 lat (usec): min=8226, max=35522, avg=13573.38, stdev=3210.12 00:35:20.164 clat percentiles (usec): 00:35:20.164 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[10814], 20.00th=[11600], 00:35:20.164 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:35:20.164 | 70.00th=[13829], 80.00th=[14615], 90.00th=[16909], 95.00th=[21890], 00:35:20.164 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26608], 99.95th=[29754], 00:35:20.164 | 99.99th=[35390] 00:35:20.164 write: IOPS=4758, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1007msec); 0 zone resets 00:35:20.164 slat (usec): min=2, max=15720, avg=104.42, stdev=658.30 00:35:20.164 clat (usec): min=2765, max=56196, avg=13485.27, stdev=5451.12 00:35:20.164 lat (usec): min=6484, max=56230, avg=13589.69, stdev=5500.14 00:35:20.164 clat percentiles (usec): 00:35:20.164 | 1.00th=[ 7963], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:35:20.164 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:35:20.164 | 70.00th=[12256], 80.00th=[13698], 90.00th=[16450], 95.00th=[24773], 00:35:20.164 | 99.00th=[40633], 99.50th=[43254], 99.90th=[43779], 99.95th=[44303], 00:35:20.164 | 99.99th=[56361] 00:35:20.164 bw ( KiB/s): min=16832, max=20480, per=25.50%, avg=18656.00, stdev=2579.53, samples=2 00:35:20.164 iops : min= 4208, max= 5120, avg=4664.00, stdev=644.88, samples=2 00:35:20.164 lat (msec) : 4=0.01%, 10=3.50%, 20=89.71%, 50=6.77%, 100=0.01% 00:35:20.164 cpu : usr=3.48%, sys=5.96%, ctx=404, majf=0, minf=1 00:35:20.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:20.164 issued rwts: total=4608,4792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:20.164 00:35:20.164 Run status group 0 (all jobs): 00:35:20.164 READ: bw=65.5MiB/s (68.7MB/s), 15.8MiB/s-17.9MiB/s (16.6MB/s-18.7MB/s), io=66.2MiB (69.4MB), run=1005-1010msec 00:35:20.164 WRITE: bw=71.5MiB/s (74.9MB/s), 17.5MiB/s-18.6MiB/s (18.3MB/s-19.5MB/s), io=72.2MiB (75.7MB), run=1005-1010msec 00:35:20.164 00:35:20.164 Disk stats (read/write): 00:35:20.164 nvme0n1: ios=3634/3878, merge=0/0, ticks=27297/34390, in_queue=61687, util=87.17% 00:35:20.164 nvme0n2: ios=3870/4096, merge=0/0, ticks=30496/34221, in_queue=64717, util=91.18% 00:35:20.164 nvme0n3: ios=3640/3854, merge=0/0, ticks=40764/55848, in_queue=96612, util=93.25% 00:35:20.164 nvme0n4: ios=3980/4096, merge=0/0, ticks=17302/17550, in_queue=34852, util=95.50% 00:35:20.164 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:20.164 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2892048 00:35:20.164 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:20.164 10:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:20.164 [global] 00:35:20.164 thread=1 00:35:20.164 invalidate=1 00:35:20.164 rw=read 00:35:20.164 time_based=1 00:35:20.164 runtime=10 00:35:20.164 ioengine=libaio 00:35:20.164 direct=1 00:35:20.164 bs=4096 00:35:20.164 iodepth=1 00:35:20.164 norandommap=1 00:35:20.164 numjobs=1 00:35:20.164 00:35:20.164 [job0] 00:35:20.164 filename=/dev/nvme0n1 00:35:20.164 [job1] 00:35:20.164 filename=/dev/nvme0n2 00:35:20.164 [job2] 00:35:20.164 filename=/dev/nvme0n3 00:35:20.164 [job3] 00:35:20.164 filename=/dev/nvme0n4 
00:35:20.164 Could not set queue depth (nvme0n1) 00:35:20.164 Could not set queue depth (nvme0n2) 00:35:20.164 Could not set queue depth (nvme0n3) 00:35:20.164 Could not set queue depth (nvme0n4) 00:35:20.424 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.424 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.424 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.424 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.424 fio-3.35 00:35:20.424 Starting 4 threads 00:35:22.968 10:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:23.229 10:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:23.229 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3379200, buflen=4096 00:35:23.229 fio: pid=2892194, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:23.489 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:23.489 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:23.489 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=569344, buflen=4096 00:35:23.489 fio: pid=2892193, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:23.748 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59797504, 
buflen=4096 00:35:23.748 fio: pid=2892187, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:23.748 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:23.749 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:24.010 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:24.010 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:24.010 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=339968, buflen=4096 00:35:24.010 fio: pid=2892189, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:24.010 00:35:24.010 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2892187: Mon Dec 9 10:46:01 2024 00:35:24.010 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(57.0MiB/3178msec) 00:35:24.010 slat (usec): min=4, max=15642, avg=10.64, stdev=184.47 00:35:24.010 clat (usec): min=169, max=4621, avg=203.94, stdev=51.70 00:35:24.010 lat (usec): min=179, max=16034, avg=214.58, stdev=194.54 00:35:24.010 clat percentiles (usec): 00:35:24.010 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:35:24.010 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:35:24.010 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 241], 00:35:24.010 | 99.00th=[ 255], 99.50th=[ 277], 99.90th=[ 449], 99.95th=[ 529], 00:35:24.010 | 99.99th=[ 3425] 00:35:24.010 bw ( KiB/s): min=17928, max=19592, per=100.00%, avg=18564.17, stdev=644.76, samples=6 
00:35:24.010 iops : min= 4482, max= 4898, avg=4641.00, stdev=161.20, samples=6 00:35:24.010 lat (usec) : 250=97.87%, 500=2.04%, 750=0.06% 00:35:24.010 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:35:24.010 cpu : usr=2.30%, sys=5.76%, ctx=14604, majf=0, minf=1 00:35:24.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.010 issued rwts: total=14600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.010 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2892189: Mon Dec 9 10:46:01 2024 00:35:24.010 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(332KiB/3423msec) 00:35:24.010 slat (usec): min=10, max=30652, avg=383.91, stdev=3342.39 00:35:24.010 clat (usec): min=530, max=42020, avg=40581.82, stdev=4459.11 00:35:24.010 lat (usec): min=563, max=71917, avg=40970.08, stdev=5629.20 00:35:24.010 clat percentiles (usec): 00:35:24.010 | 1.00th=[ 529], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:24.010 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:24.010 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:24.010 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:24.011 | 99.99th=[42206] 00:35:24.011 bw ( KiB/s): min= 86, max= 104, per=0.53%, avg=97.00, stdev= 6.66, samples=6 00:35:24.011 iops : min= 21, max= 26, avg=24.17, stdev= 1.83, samples=6 00:35:24.011 lat (usec) : 750=1.19% 00:35:24.011 lat (msec) : 50=97.62% 00:35:24.011 cpu : usr=0.00%, sys=0.06%, ctx=89, majf=0, minf=2 00:35:24.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.011 complete : 
0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.011 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.011 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2892193: Mon Dec 9 10:46:01 2024 00:35:24.011 read: IOPS=46, BW=186KiB/s (191kB/s)(556KiB/2987msec) 00:35:24.011 slat (nsec): min=8821, max=35154, avg=17786.44, stdev=7457.59 00:35:24.011 clat (usec): min=222, max=42427, avg=21308.24, stdev=20396.03 00:35:24.011 lat (usec): min=233, max=42438, avg=21325.97, stdev=20394.37 00:35:24.011 clat percentiles (usec): 00:35:24.011 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 239], 00:35:24.011 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[40633], 60.00th=[40633], 00:35:24.011 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:24.011 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:24.011 | 99.99th=[42206] 00:35:24.011 bw ( KiB/s): min= 128, max= 208, per=0.98%, avg=180.80, stdev=33.27, samples=5 00:35:24.011 iops : min= 32, max= 52, avg=45.20, stdev= 8.32, samples=5 00:35:24.011 lat (usec) : 250=39.29%, 500=8.57% 00:35:24.011 lat (msec) : 50=51.43% 00:35:24.011 cpu : usr=0.00%, sys=0.20%, ctx=144, majf=0, minf=1 00:35:24.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.011 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.011 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.011 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2892194: Mon Dec 9 10:46:01 2024 00:35:24.011 read: IOPS=301, BW=1205KiB/s (1234kB/s)(3300KiB/2739msec) 00:35:24.011 slat 
(usec): min=7, max=146, avg=10.15, stdev= 6.32 00:35:24.011 clat (usec): min=159, max=41992, avg=3281.06, stdev=10658.00 00:35:24.011 lat (usec): min=233, max=42004, avg=3291.19, stdev=10660.14 00:35:24.011 clat percentiles (usec): 00:35:24.011 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:35:24.011 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:35:24.011 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[41157], 00:35:24.011 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:35:24.011 | 99.99th=[42206] 00:35:24.011 bw ( KiB/s): min= 96, max= 6112, per=7.16%, avg=1310.40, stdev=2684.22, samples=5 00:35:24.011 iops : min= 24, max= 1528, avg=327.60, stdev=671.05, samples=5 00:35:24.011 lat (usec) : 250=8.84%, 500=83.54%, 750=0.12% 00:35:24.011 lat (msec) : 50=7.38% 00:35:24.011 cpu : usr=0.07%, sys=0.37%, ctx=827, majf=0, minf=2 00:35:24.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.011 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.011 issued rwts: total=826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.011 00:35:24.011 Run status group 0 (all jobs): 00:35:24.011 READ: bw=17.9MiB/s (18.7MB/s), 97.0KiB/s-17.9MiB/s (99.3kB/s-18.8MB/s), io=61.1MiB (64.1MB), run=2739-3423msec 00:35:24.011 00:35:24.011 Disk stats (read/write): 00:35:24.011 nvme0n1: ios=14378/0, merge=0/0, ticks=2809/0, in_queue=2809, util=94.48% 00:35:24.011 nvme0n2: ios=116/0, merge=0/0, ticks=4008/0, in_queue=4008, util=98.65% 00:35:24.011 nvme0n3: ios=173/0, merge=0/0, ticks=3842/0, in_queue=3842, util=99.59% 00:35:24.011 nvme0n4: ios=845/0, merge=0/0, ticks=2835/0, in_queue=2835, util=100.00% 00:35:24.272 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:24.272 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:24.272 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:24.272 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:24.533 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:24.533 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:24.794 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:24.794 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2892048 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:25.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:25.055 10:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:25.055 nvmf hotplug test: fio failed as expected 00:35:25.055 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:25.315 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:25.315 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:25.315 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:25.315 10:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:25.315 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:25.315 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:25.316 rmmod nvme_tcp 00:35:25.316 rmmod nvme_fabrics 00:35:25.316 rmmod nvme_keyring 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2889426 ']' 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2889426 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2889426 ']' 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2889426 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@959 -- # uname 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.316 10:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2889426 00:35:25.316 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.316 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.316 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2889426' 00:35:25.316 killing process with pid 2889426 00:35:25.316 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2889426 00:35:25.316 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2889426 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:25.577 10:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.577 10:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.212 00:35:28.212 real 0m25.757s 00:35:28.212 user 1m32.531s 00:35:28.212 sys 0m11.092s 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:28.212 ************************************ 00:35:28.212 END TEST nvmf_fio_target 00:35:28.212 ************************************ 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:28.212 ************************************ 00:35:28.212 START TEST nvmf_bdevio 00:35:28.212 
************************************ 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:28.212 * Looking for test storage... 00:35:28.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:28.212 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.213 --rc genhtml_branch_coverage=1 00:35:28.213 --rc genhtml_function_coverage=1 00:35:28.213 --rc genhtml_legend=1 00:35:28.213 --rc geninfo_all_blocks=1 00:35:28.213 --rc geninfo_unexecuted_blocks=1 00:35:28.213 00:35:28.213 ' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.213 --rc genhtml_branch_coverage=1 00:35:28.213 --rc genhtml_function_coverage=1 00:35:28.213 --rc genhtml_legend=1 00:35:28.213 --rc geninfo_all_blocks=1 00:35:28.213 --rc geninfo_unexecuted_blocks=1 00:35:28.213 00:35:28.213 ' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.213 --rc genhtml_branch_coverage=1 00:35:28.213 --rc genhtml_function_coverage=1 00:35:28.213 --rc genhtml_legend=1 00:35:28.213 --rc geninfo_all_blocks=1 00:35:28.213 --rc geninfo_unexecuted_blocks=1 00:35:28.213 00:35:28.213 ' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:35:28.213 --rc genhtml_branch_coverage=1 00:35:28.213 --rc genhtml_function_coverage=1 00:35:28.213 --rc genhtml_legend=1 00:35:28.213 --rc geninfo_all_blocks=1 00:35:28.213 --rc geninfo_unexecuted_blocks=1 00:35:28.213 00:35:28.213 ' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:28.213 10:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.213 10:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:28.213 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.214 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.214 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.214 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:28.214 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:28.214 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.214 10:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:33.507 10:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:33.507 10:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:33.507 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:33.507 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:33.507 Found net devices under 0000:86:00.0: cvl_0_0 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:33.507 Found net devices under 0000:86:00.1: cvl_0_1 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:33.507 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:33.507 
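The device-scan phase traced above matches NICs against known PCI vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and assorted Mellanox IDs) before picking TCP test interfaces. A minimal standalone sketch of that matching logic — the `classify_nic` helper is hypothetical, and it runs against hard-coded IDs from this log rather than a live `lspci` scan:

```shell
#!/usr/bin/env bash
# Sketch of the NIC classification done during gather_supported_nvmf_pci_devs:
# match "vendor:device" pairs against the e810/x722/mlx ID tables seen above.
# classify_nic is an illustrative helper, not part of nvmf/common.sh.

classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;     # Intel X722
    0x15b3:*)                    echo mlx ;;      # Mellanox devices
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # the two NICs found in this run (0000:86:00.0/1)
classify_nic 0x15b3 0x101d
```

In this run both discovered devices report 0x8086:0x159b, which is why the log prints "Found 0000:86:00.0 (0x8086 - 0x159b)" for each and takes the e810 branch.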
10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:33.508 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:33.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:33.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:35:33.769 00:35:33.769 --- 10.0.0.2 ping statistics --- 00:35:33.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.769 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:35:33.769 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:33.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
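The target-network plumbing just traced (namespace creation, moving cvl_0_0 into it, addressing both ends, punching the 4420 firewall hole, then ping-verifying both directions) boils down to the sequence below. This is a dry-run sketch: the `run` wrapper is an illustrative addition that echoes each command instead of executing it, since the real sequence needs root and the physical cvl_* interfaces from this host.

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init's namespace setup; names and addresses
# are taken from this log. Swap run() for: run() { "$@"; }  to execute for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                           # target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```

The two pings mirror the @290/@291 checks above; both returning a reply is what lets nvmf_tcp_init fall through to `return 0`.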
00:35:33.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:35:33.770 00:35:33.770 --- 10.0.0.1 ping statistics --- 00:35:33.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.770 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2896429 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2896429 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2896429 ']' 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.770 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.770 [2024-12-09 10:46:11.463561] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:33.770 [2024-12-09 10:46:11.464572] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:35:33.770 [2024-12-09 10:46:11.464613] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.031 [2024-12-09 10:46:11.545900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:34.031 [2024-12-09 10:46:11.593418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.031 [2024-12-09 10:46:11.593454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.031 [2024-12-09 10:46:11.593461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.031 [2024-12-09 10:46:11.593467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.031 [2024-12-09 10:46:11.593472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.031 [2024-12-09 10:46:11.594924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:34.031 [2024-12-09 10:46:11.595029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:34.031 [2024-12-09 10:46:11.595045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:34.031 [2024-12-09 10:46:11.595050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:34.031 [2024-12-09 10:46:11.663696] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:34.031 [2024-12-09 10:46:11.664008] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:34.031 [2024-12-09 10:46:11.664046] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:34.031 [2024-12-09 10:46:11.664281] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:34.031 [2024-12-09 10:46:11.664382] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:34.031 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.031 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:34.031 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:34.031 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.031 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.032 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.032 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:34.032 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.032 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.032 [2024-12-09 10:46:11.739885] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.293 Malloc0 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.293 [2024-12-09 10:46:11.836157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
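Steps @18 through @22 of bdevio.sh above are ordinary JSON-RPC calls against the target's /var/tmp/spdk.sock. Collected in order as a dry-run sketch — the `rpc` echo wrapper is an illustrative stand-in for SPDK's scripts/rpc.py, which needs the nvmf_tgt from this run to be listening:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the bdevio target provisioning RPCs from this log.
# On a live target: rpc() { ./scripts/rpc.py "$@"; }
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB units
rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The sizes come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set at @11/@12 earlier in this section, and the final listener call is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above.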
00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:34.293 { 00:35:34.293 "params": { 00:35:34.293 "name": "Nvme$subsystem", 00:35:34.293 "trtype": "$TEST_TRANSPORT", 00:35:34.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.293 "adrfam": "ipv4", 00:35:34.293 "trsvcid": "$NVMF_PORT", 00:35:34.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.293 "hdgst": ${hdgst:-false}, 00:35:34.293 "ddgst": ${ddgst:-false} 00:35:34.293 }, 00:35:34.293 "method": "bdev_nvme_attach_controller" 00:35:34.293 } 00:35:34.293 EOF 00:35:34.293 )") 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:34.293 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:34.294 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:34.294 "params": { 00:35:34.294 "name": "Nvme1", 00:35:34.294 "trtype": "tcp", 00:35:34.294 "traddr": "10.0.0.2", 00:35:34.294 "adrfam": "ipv4", 00:35:34.294 "trsvcid": "4420", 00:35:34.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.294 "hdgst": false, 00:35:34.294 "ddgst": false 00:35:34.294 }, 00:35:34.294 "method": "bdev_nvme_attach_controller" 00:35:34.294 }' 00:35:34.294 [2024-12-09 10:46:11.887583] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:35:34.294 [2024-12-09 10:46:11.887631] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896452 ] 00:35:34.294 [2024-12-09 10:46:11.966987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:34.294 [2024-12-09 10:46:12.011327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.294 [2024-12-09 10:46:12.011433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.294 [2024-12-09 10:46:12.011434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:34.865 I/O targets: 00:35:34.865 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:34.865 00:35:34.865 00:35:34.865 CUnit - A unit testing framework for C - Version 2.1-3 00:35:34.865 http://cunit.sourceforge.net/ 00:35:34.865 00:35:34.865 00:35:34.865 Suite: bdevio tests on: Nvme1n1 00:35:34.865 Test: blockdev write read block ...passed 00:35:34.865 Test: blockdev write zeroes read block ...passed 00:35:34.865 Test: blockdev write zeroes read no split ...passed 00:35:34.865 Test: blockdev 
write zeroes read split ...passed 00:35:34.865 Test: blockdev write zeroes read split partial ...passed 00:35:34.865 Test: blockdev reset ...[2024-12-09 10:46:12.516371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:34.865 [2024-12-09 10:46:12.516438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbbf30 (9): Bad file descriptor 00:35:34.865 [2024-12-09 10:46:12.519758] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:34.865 passed 00:35:34.865 Test: blockdev write read 8 blocks ...passed 00:35:35.126 Test: blockdev write read size > 128k ...passed 00:35:35.126 Test: blockdev write read invalid size ...passed 00:35:35.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:35.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:35.126 Test: blockdev write read max offset ...passed 00:35:35.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:35.126 Test: blockdev writev readv 8 blocks ...passed 00:35:35.126 Test: blockdev writev readv 30 x 1block ...passed 00:35:35.126 Test: blockdev writev readv block ...passed 00:35:35.126 Test: blockdev writev readv size > 128k ...passed 00:35:35.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:35.126 Test: blockdev comparev and writev ...[2024-12-09 10:46:12.812074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.126 [2024-12-09 10:46:12.812099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:35.126 [2024-12-09 10:46:12.812113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.126 
[2024-12-09 10:46:12.812121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.126 [2024-12-09 10:46:12.812408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.126 [2024-12-09 10:46:12.812417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:35.126 [2024-12-09 10:46:12.812429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.126 [2024-12-09 10:46:12.812435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:35.126 [2024-12-09 10:46:12.812723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.126 [2024-12-09 10:46:12.812733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:35.126 [2024-12-09 10:46:12.812745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.126 [2024-12-09 10:46:12.812752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:35.127 [2024-12-09 10:46:12.813037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.127 [2024-12-09 10:46:12.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:35.127 [2024-12-09 10:46:12.813060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:35.127 [2024-12-09 10:46:12.813067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:35.388 passed 00:35:35.388 Test: blockdev nvme passthru rw ...passed 00:35:35.388 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:46:12.895113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:35.388 [2024-12-09 10:46:12.895130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:35.388 [2024-12-09 10:46:12.895247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:35.388 [2024-12-09 10:46:12.895256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:35.388 [2024-12-09 10:46:12.895371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:35.388 [2024-12-09 10:46:12.895379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:35.388 [2024-12-09 10:46:12.895499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:35.388 [2024-12-09 10:46:12.895508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:35.388 passed 00:35:35.388 Test: blockdev nvme admin passthru ...passed 00:35:35.388 Test: blockdev copy ...passed 00:35:35.388 00:35:35.388 Run Summary: Type Total Ran Passed Failed Inactive 00:35:35.388 suites 1 1 n/a 0 0 00:35:35.388 tests 23 23 23 0 0 00:35:35.388 asserts 152 152 152 0 n/a 00:35:35.388 00:35:35.388 Elapsed time = 1.171 
seconds 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:35.388 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:35.388 rmmod nvme_tcp 00:35:35.649 rmmod nvme_fabrics 00:35:35.649 rmmod nvme_keyring 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2896429 ']' 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2896429 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2896429 ']' 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2896429 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896429 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896429' 00:35:35.649 killing process with pid 2896429 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2896429 00:35:35.649 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2896429 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.910 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.824 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:37.824 00:35:37.824 real 0m10.115s 00:35:37.824 user 0m9.921s 00:35:37.824 sys 0m5.220s 00:35:37.824 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.824 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.824 ************************************ 00:35:37.824 END TEST nvmf_bdevio 00:35:37.824 ************************************ 00:35:37.824 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:37.824 00:35:37.824 real 4m34.183s 00:35:37.824 user 9m8.425s 00:35:37.824 sys 1m51.362s 00:35:37.824 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:35:37.824 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:37.824 ************************************ 00:35:37.824 END TEST nvmf_target_core_interrupt_mode 00:35:37.824 ************************************ 00:35:37.824 10:46:15 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:37.824 10:46:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:37.824 10:46:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.824 10:46:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:38.085 ************************************ 00:35:38.085 START TEST nvmf_interrupt 00:35:38.085 ************************************ 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:38.085 * Looking for test storage... 
00:35:38.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:38.085 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.086 --rc genhtml_branch_coverage=1 00:35:38.086 --rc genhtml_function_coverage=1 00:35:38.086 --rc genhtml_legend=1 00:35:38.086 --rc geninfo_all_blocks=1 00:35:38.086 --rc geninfo_unexecuted_blocks=1 00:35:38.086 00:35:38.086 ' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.086 --rc genhtml_branch_coverage=1 00:35:38.086 --rc 
genhtml_function_coverage=1 00:35:38.086 --rc genhtml_legend=1 00:35:38.086 --rc geninfo_all_blocks=1 00:35:38.086 --rc geninfo_unexecuted_blocks=1 00:35:38.086 00:35:38.086 ' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.086 --rc genhtml_branch_coverage=1 00:35:38.086 --rc genhtml_function_coverage=1 00:35:38.086 --rc genhtml_legend=1 00:35:38.086 --rc geninfo_all_blocks=1 00:35:38.086 --rc geninfo_unexecuted_blocks=1 00:35:38.086 00:35:38.086 ' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.086 --rc genhtml_branch_coverage=1 00:35:38.086 --rc genhtml_function_coverage=1 00:35:38.086 --rc genhtml_legend=1 00:35:38.086 --rc geninfo_all_blocks=1 00:35:38.086 --rc geninfo_unexecuted_blocks=1 00:35:38.086 00:35:38.086 ' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.086 
10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.086 
10:46:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.086 10:46:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.086 
10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.086 10:46:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:44.694 10:46:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:44.694 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:44.694 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.694 10:46:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:44.694 Found net devices under 0000:86:00.0: cvl_0_0 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:44.694 Found net devices under 0000:86:00.1: cvl_0_1 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:44.694 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:44.695 10:46:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:44.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:44.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms
00:35:44.695 
00:35:44.695 --- 10.0.0.2 ping statistics ---
00:35:44.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:44.695 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:44.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:44.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:35:44.695 
00:35:44.695 --- 10.0.0.1 ping statistics ---
00:35:44.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:44.695 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:44.695 10:46:21
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2900221 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2900221 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2900221 ']' 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.695 [2024-12-09 10:46:21.753644] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:44.695 [2024-12-09 10:46:21.754550] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
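
The `reactor_is_busy_or_idle` checks that dominate the rest of this log classify a reactor from a single `top -bHn 1` line: they strip leading whitespace, take field 9 (the %CPU column, exactly the `awk '{print $9}'` in the trace), truncate it to an integer, and compare it against the busy/idle thresholds. A minimal Python sketch of that classification under stated assumptions — the function names and the "indeterminate" case are mine, not SPDK's, and the real shell helper retries in a loop rather than returning a label:

```python
def reactor_cpu_rate(top_line: str) -> int:
    # Field 9 of a `top -bH` thread line is %CPU; the trace truncates
    # it to an integer (e.g. cpu_rate=99.9 becomes cpu_rate=99).
    return int(float(top_line.split()[8]))

def reactor_state(top_line: str,
                  busy_threshold: int = 65,
                  idle_threshold: int = 30) -> str:
    # Defaults mirror the busy_threshold=65 / idle_threshold=30 locals in
    # interrupt/common.sh; the perf phase overrides BUSY_THRESHOLD to 30.
    rate = reactor_cpu_rate(top_line)
    if rate >= busy_threshold:
        return "busy"
    return "idle" if rate <= idle_threshold else "indeterminate"
```

Applied to the `top_reactor` lines captured below, a 0.0 %CPU reactor classifies as idle and a 99.9 %CPU reactor as busy.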
00:35:44.695 [2024-12-09 10:46:21.754580] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:44.695 [2024-12-09 10:46:21.834041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:35:44.695 [2024-12-09 10:46:21.874819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:44.695 [2024-12-09 10:46:21.874854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:44.695 [2024-12-09 10:46:21.874861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:44.695 [2024-12-09 10:46:21.874867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:44.695 [2024-12-09 10:46:21.874872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:44.695 [2024-12-09 10:46:21.876044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:44.695 [2024-12-09 10:46:21.876046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:44.695 [2024-12-09 10:46:21.943143] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:44.695 [2024-12-09 10:46:21.943690] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:35:44.695 [2024-12-09 10:46:21.943884] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
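
The spdk_nvme_perf summary printed later in this run reports per-core rows and a Total row. The Total can be re-derived from the per-core rows: IOPS and MiB/s sum, average latency is IOPS-weighted, and min/max latency are the extremes across cores. A hypothetical re-derivation in Python (the tuple layout is my own convention, not a spdk_nvme_perf interface):

```python
def total_row(rows):
    """rows: list of (iops, avg_lat_us, min_lat_us, max_lat_us), one per core."""
    total_iops = sum(r[0] for r in rows)
    # Average latency across cores must be weighted by each core's IOPS.
    weighted_avg = sum(r[0] * r[1] for r in rows) / total_iops
    return (total_iops,
            weighted_avg,
            min(r[2] for r in rows),
            max(r[3] for r in rows))
```

Feeding in the two per-core rows from the summary below reproduces its Total row (33916.20 IOPS, ~15101.10 us average, 2825.13 us min, 28681.40 us max) to within rounding.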
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:44.695 10:46:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:44.695 10:46:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:44.695 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:35:44.695 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:35:44.695 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:35:44.695 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:35:44.696 5000+0 records in
00:35:44.696 5000+0 records out
00:35:44.696 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0166912 s, 613 MB/s
00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:44.696 AIO0
00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:44.696 10:46:22
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.696 [2024-12-09 10:46:22.068825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.696 [2024-12-09 10:46:22.109137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2900221 0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2900221 0 idle 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900221 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.24 reactor_0' 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900221 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.24 reactor_0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:44.696 
10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2900221 1 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2900221 1 idle 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:44.696 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900226 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900226 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:44.960 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2900367 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2900221 0 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2900221 0 busy 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900221 root 20 0 128.2g 47616 34560 S 13.3 0.0 0:00.26 reactor_0' 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900221 root 20 0 128.2g 47616 34560 S 13.3 0.0 0:00.26 reactor_0 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:44.961 10:46:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:46.345 10:46:23 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900221 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.54 reactor_0' 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900221 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.54 reactor_0 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2900221 1 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2900221 1 busy 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30
00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:46.345 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256
00:35:46.346 10:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900226 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.33 reactor_1'
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900226 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.33 reactor_1
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:46.346 10:46:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2900367
00:35:56.359 Initializing NVMe Controllers
00:35:56.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:56.359 Controller IO queue size 256, less than required.
00:35:56.359 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:56.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:35:56.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:35:56.359 Initialization complete. Launching workers.
00:35:56.359 ========================================================
00:35:56.359 Latency(us)
00:35:56.359 Device Information : IOPS MiB/s Average min max
00:35:56.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 17070.80 66.68 15002.75 2825.13 28681.40
00:35:56.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16845.40 65.80 15200.77 7472.55 27608.67
00:35:56.359 ========================================================
00:35:56.359 Total : 33916.20 132.49 15101.10 2825.13 28681.40
00:35:56.359 
00:35:56.359 10:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:35:56.359 10:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2900221 0
00:35:56.359 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2900221 0 idle
00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221
00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle
!= \i\d\l\e ]] 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900221 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0' 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900221 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2900221 1 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2900221 1 idle 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:56.360 10:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900226 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900226 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:56.360 10:46:33 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:56.360 10:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2900221 0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2900221 0 idle 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900221 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.47 reactor_0' 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900221 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.47 reactor_0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2900221 1 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2900221 1 idle 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2900221 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2900221 -w 256 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2900226 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1' 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2900226 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:58.270 
10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:58.270 10:46:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:58.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:58.530 10:46:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:58.531 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:58.531 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:58.531 10:46:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:58.531 10:46:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:58.531 rmmod nvme_tcp 00:35:58.531 rmmod nvme_fabrics 00:35:58.531 rmmod nvme_keyring 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2900221 ']' 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2900221 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2900221 ']' 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2900221 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900221 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900221' 00:35:58.531 killing process with pid 2900221 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2900221 00:35:58.531 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2900221 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:58.791 10:46:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:58.791 10:46:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.704 10:46:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.704 00:36:00.704 real 0m22.852s 00:36:00.704 user 0m39.588s 00:36:00.704 sys 0m8.581s 00:36:00.704 10:46:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.704 10:46:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:00.704 ************************************ 00:36:00.704 END TEST nvmf_interrupt 00:36:00.704 ************************************ 00:36:00.964 00:36:00.965 real 27m33.719s 00:36:00.965 user 57m10.962s 00:36:00.965 sys 9m21.634s 00:36:00.965 10:46:38 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.965 10:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.965 ************************************ 00:36:00.965 END TEST nvmf_tcp 00:36:00.965 ************************************ 00:36:00.965 10:46:38 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:00.965 10:46:38 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:00.965 10:46:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:00.965 10:46:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.965 10:46:38 -- common/autotest_common.sh@10 -- # set +x 00:36:00.965 ************************************ 00:36:00.965 START TEST spdkcli_nvmf_tcp 00:36:00.965 ************************************ 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:00.965 * Looking for test storage... 00:36:00.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.965 
10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.965 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:01.226 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:01.226 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.226 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:01.226 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.226 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:01.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.227 --rc genhtml_branch_coverage=1 00:36:01.227 --rc genhtml_function_coverage=1 00:36:01.227 
--rc genhtml_legend=1 00:36:01.227 --rc geninfo_all_blocks=1 00:36:01.227 --rc geninfo_unexecuted_blocks=1 00:36:01.227 00:36:01.227 ' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.227 --rc genhtml_branch_coverage=1 00:36:01.227 --rc genhtml_function_coverage=1 00:36:01.227 --rc genhtml_legend=1 00:36:01.227 --rc geninfo_all_blocks=1 00:36:01.227 --rc geninfo_unexecuted_blocks=1 00:36:01.227 00:36:01.227 ' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.227 --rc genhtml_branch_coverage=1 00:36:01.227 --rc genhtml_function_coverage=1 00:36:01.227 --rc genhtml_legend=1 00:36:01.227 --rc geninfo_all_blocks=1 00:36:01.227 --rc geninfo_unexecuted_blocks=1 00:36:01.227 00:36:01.227 ' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.227 --rc genhtml_branch_coverage=1 00:36:01.227 --rc genhtml_function_coverage=1 00:36:01.227 --rc genhtml_legend=1 00:36:01.227 --rc geninfo_all_blocks=1 00:36:01.227 --rc geninfo_unexecuted_blocks=1 00:36:01.227 00:36:01.227 ' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:01.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2903105 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2903105 00:36:01.227 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2903105 ']' 00:36:01.228 10:46:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:01.228 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.228 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.228 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.228 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.228 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.228 [2024-12-09 10:46:38.778771] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:36:01.228 [2024-12-09 10:46:38.778825] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903105 ] 00:36:01.228 [2024-12-09 10:46:38.855442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:01.228 [2024-12-09 10:46:38.898772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.228 [2024-12-09 10:46:38.898774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.488 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.488 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:01.488 10:46:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:01.488 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:01.488 10:46:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.488 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:01.488 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:01.488 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:01.488 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.488 10:46:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.488 10:46:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:01.488 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:01.488 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:01.488 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:01.488 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:36:01.488 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:01.488 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:01.488 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:01.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:01.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:01.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:01.488 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:01.488 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:01.489 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:01.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:01.489 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:01.489 ' 00:36:04.026 [2024-12-09 10:46:41.712747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.404 [2024-12-09 10:46:43.053217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:07.936 [2024-12-09 10:46:45.537034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:10.473 [2024-12-09 10:46:47.707846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:11.850 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:11.850 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:11.850 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:11.850 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:11.850 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:11.850 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:11.850 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:11.850 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:11.850 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:11.850 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:11.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:11.851 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:11.851 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:11.851 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:11.851 10:46:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.419 10:46:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:12.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:12.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:12.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:12.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:12.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:12.419 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:12.419 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:12.419 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:36:12.419 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:12.419 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:12.419 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:12.419 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:12.419 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:12.419 ' 00:36:18.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:18.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:18.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:18.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:18.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:18.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:18.988 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:18.988 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:18.988 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:18.988 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:18.988 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:18.988 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:18.988 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:18.988 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2903105 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2903105 ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2903105 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2903105 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2903105' 00:36:18.988 killing process with pid 2903105 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2903105 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2903105 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2903105 ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2903105 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2903105 ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2903105 00:36:18.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2903105) - No such process 00:36:18.988 10:46:55 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2903105 is not found' 00:36:18.988 Process with pid 2903105 is not found 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:18.988 00:36:18.988 real 0m17.329s 00:36:18.988 user 0m38.170s 00:36:18.988 sys 0m0.804s 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:18.988 10:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.988 ************************************ 00:36:18.988 END TEST spdkcli_nvmf_tcp 00:36:18.988 ************************************ 00:36:18.988 10:46:55 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:18.988 10:46:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:18.988 10:46:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:18.988 10:46:55 -- common/autotest_common.sh@10 -- # set +x 00:36:18.988 ************************************ 00:36:18.988 START TEST nvmf_identify_passthru 00:36:18.988 ************************************ 00:36:18.988 10:46:55 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:18.988 * Looking for test storage... 
00:36:18.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:18.988 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:18.988 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:36:18.988 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:18.988 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:18.988 10:46:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:18.989 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:18.989 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.989 --rc genhtml_branch_coverage=1 00:36:18.989 --rc genhtml_function_coverage=1 00:36:18.989 --rc genhtml_legend=1 00:36:18.989 --rc geninfo_all_blocks=1 00:36:18.989 --rc geninfo_unexecuted_blocks=1 00:36:18.989 00:36:18.989 ' 00:36:18.989 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.989 --rc genhtml_branch_coverage=1 00:36:18.989 --rc genhtml_function_coverage=1 
00:36:18.989 --rc genhtml_legend=1 00:36:18.989 --rc geninfo_all_blocks=1 00:36:18.989 --rc geninfo_unexecuted_blocks=1 00:36:18.989 00:36:18.989 ' 00:36:18.989 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.989 --rc genhtml_branch_coverage=1 00:36:18.989 --rc genhtml_function_coverage=1 00:36:18.989 --rc genhtml_legend=1 00:36:18.989 --rc geninfo_all_blocks=1 00:36:18.989 --rc geninfo_unexecuted_blocks=1 00:36:18.989 00:36:18.989 ' 00:36:18.989 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.989 --rc genhtml_branch_coverage=1 00:36:18.989 --rc genhtml_function_coverage=1 00:36:18.989 --rc genhtml_legend=1 00:36:18.989 --rc geninfo_all_blocks=1 00:36:18.989 --rc geninfo_unexecuted_blocks=1 00:36:18.989 00:36:18.989 ' 00:36:18.989 10:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.989 10:46:56 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:18.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:18.989 10:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.989 10:46:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:18.989 10:46:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.989 10:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:18.989 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:36:18.990 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:18.990 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:18.990 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.990 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:18.990 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.990 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:18.990 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:18.990 10:46:56 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:18.990 10:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:24.293 10:47:01 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:24.293 
10:47:01 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:24.293 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:24.293 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:24.293 Found net devices under 0000:86:00.0: cvl_0_0 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:24.293 Found net devices under 0000:86:00.1: cvl_0_1 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:24.293 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:24.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:24.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:36:24.294 00:36:24.294 --- 10.0.0.2 ping statistics --- 00:36:24.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.294 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:24.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:24.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:36:24.294 00:36:24.294 --- 10.0.0.1 ping statistics --- 00:36:24.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.294 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:24.294 10:47:01 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:24.554 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.554 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:24.554 10:47:02 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:36:24.554 10:47:02 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:36:24.554 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:36:24.554 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:36:24.554 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:24.554 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:24.554 10:47:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:29.902 10:47:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:36:29.902 10:47:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:29.902 10:47:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:29.902 10:47:06 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:34.131 10:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:34.131 10:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.131 10:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.131 10:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2910435 00:36:34.131 10:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:34.131 10:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:34.131 10:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2910435 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2910435 ']' 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
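The serial and model numbers above are recovered from `spdk_nvme_identify` output with a plain `grep | awk '{print $3}'` pipeline. A runnable sketch of that idiom on a sample line (the function name is mine; the sample value is taken from the log):

```shell
# extract_field: print the third whitespace-separated token of lines
# matching a label -- the same grep|awk idiom the test script uses.
extract_field() {
  local label=$1
  grep "$label" | awk '{print $3}'
}

printf 'Serial Number: PHLN951000C61P6AGN\n' | extract_field 'Serial Number:'
```

Note that `$3` keeps only the first word after the label, which is why the log records the multi-word model string as just `nvme_model_number=INTEL`; that is harmless here because the passthru comparison uses the same truncation on both sides.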
00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.131 10:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.131 [2024-12-09 10:47:11.726752] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:36:34.131 [2024-12-09 10:47:11.726796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.132 [2024-12-09 10:47:11.804918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:34.132 [2024-12-09 10:47:11.847514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.132 [2024-12-09 10:47:11.847553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.132 [2024-12-09 10:47:11.847560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.132 [2024-12-09 10:47:11.847566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.132 [2024-12-09 10:47:11.847571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:34.132 [2024-12-09 10:47:11.849078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.132 [2024-12-09 10:47:11.849117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.132 [2024-12-09 10:47:11.849152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.132 [2024-12-09 10:47:11.849153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:35.069 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.069 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:35.069 10:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:35.069 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.069 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:35.069 INFO: Log level set to 20 00:36:35.069 INFO: Requests: 00:36:35.069 { 00:36:35.069 "jsonrpc": "2.0", 00:36:35.069 "method": "nvmf_set_config", 00:36:35.069 "id": 1, 00:36:35.069 "params": { 00:36:35.069 "admin_cmd_passthru": { 00:36:35.069 "identify_ctrlr": true 00:36:35.069 } 00:36:35.069 } 00:36:35.069 } 00:36:35.069 00:36:35.069 INFO: response: 00:36:35.069 { 00:36:35.069 "jsonrpc": "2.0", 00:36:35.069 "id": 1, 00:36:35.069 "result": true 00:36:35.069 } 00:36:35.069 00:36:35.069 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.069 10:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:35.069 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.069 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:35.069 INFO: Setting log level to 20 00:36:35.069 INFO: Setting log level to 20 00:36:35.069 INFO: Log level set to 20 00:36:35.069 INFO: Log level set to 20 00:36:35.069 
INFO: Requests: 00:36:35.069 { 00:36:35.069 "jsonrpc": "2.0", 00:36:35.069 "method": "framework_start_init", 00:36:35.069 "id": 1 00:36:35.069 } 00:36:35.069 00:36:35.069 INFO: Requests: 00:36:35.069 { 00:36:35.069 "jsonrpc": "2.0", 00:36:35.069 "method": "framework_start_init", 00:36:35.069 "id": 1 00:36:35.069 } 00:36:35.069 00:36:35.069 [2024-12-09 10:47:12.647660] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:35.069 INFO: response: 00:36:35.069 { 00:36:35.069 "jsonrpc": "2.0", 00:36:35.069 "id": 1, 00:36:35.069 "result": true 00:36:35.069 } 00:36:35.069 00:36:35.069 INFO: response: 00:36:35.069 { 00:36:35.069 "jsonrpc": "2.0", 00:36:35.069 "id": 1, 00:36:35.069 "result": true 00:36:35.069 } 00:36:35.069 00:36:35.070 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.070 10:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:35.070 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.070 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:35.070 INFO: Setting log level to 40 00:36:35.070 INFO: Setting log level to 40 00:36:35.070 INFO: Setting log level to 40 00:36:35.070 [2024-12-09 10:47:12.660961] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.070 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.070 10:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:35.070 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.070 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:35.070 10:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:35.070 10:47:12 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.070 10:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.357 Nvme0n1 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.357 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.357 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.357 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.357 [2024-12-09 10:47:15.571346] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.357 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:38.357 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.357 10:47:15 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.357 [ 00:36:38.357 { 00:36:38.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:38.357 "subtype": "Discovery", 00:36:38.357 "listen_addresses": [], 00:36:38.357 "allow_any_host": true, 00:36:38.357 "hosts": [] 00:36:38.357 }, 00:36:38.357 { 00:36:38.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:38.358 "subtype": "NVMe", 00:36:38.358 "listen_addresses": [ 00:36:38.358 { 00:36:38.358 "trtype": "TCP", 00:36:38.358 "adrfam": "IPv4", 00:36:38.358 "traddr": "10.0.0.2", 00:36:38.358 "trsvcid": "4420" 00:36:38.358 } 00:36:38.358 ], 00:36:38.358 "allow_any_host": true, 00:36:38.358 "hosts": [], 00:36:38.358 "serial_number": "SPDK00000000000001", 00:36:38.358 "model_number": "SPDK bdev Controller", 00:36:38.358 "max_namespaces": 1, 00:36:38.358 "min_cntlid": 1, 00:36:38.358 "max_cntlid": 65519, 00:36:38.358 "namespaces": [ 00:36:38.358 { 00:36:38.358 "nsid": 1, 00:36:38.358 "bdev_name": "Nvme0n1", 00:36:38.358 "name": "Nvme0n1", 00:36:38.358 "nguid": "0AFCD0DB10794F15AE94207030DC1E56", 00:36:38.358 "uuid": "0afcd0db-1079-4f15-ae94-207030dc1e56" 00:36:38.358 } 00:36:38.358 ] 00:36:38.358 } 00:36:38.358 ] 00:36:38.358 10:47:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.358 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:38.358 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:38.358 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:38.358 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:36:38.358 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:38.358 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:38.358 10:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:38.358 10:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:38.358 10:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:36:38.358 10:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:38.358 10:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.358 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.358 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.358 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.358 10:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:38.358 10:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:38.358 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:38.358 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:38.358 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.358 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:38.358 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.358 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.358 rmmod nvme_tcp 00:36:38.358 rmmod nvme_fabrics 00:36:38.616 rmmod nvme_keyring 00:36:38.616 10:47:16 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:38.616 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:38.616 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:38.616 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2910435 ']' 00:36:38.616 10:47:16 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2910435 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2910435 ']' 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2910435 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2910435 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2910435' 00:36:38.616 killing process with pid 2910435 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2910435 00:36:38.616 10:47:16 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2910435 00:36:40.526 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:40.526 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:40.526 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:40.526 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:40.526 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:40.526 10:47:18 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:40.526 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:40.526 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:40.527 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:40.527 10:47:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.527 10:47:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:40.527 10:47:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.070 10:47:20 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:43.070 00:36:43.070 real 0m24.374s 00:36:43.070 user 0m33.411s 00:36:43.070 sys 0m6.291s 00:36:43.070 10:47:20 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:43.070 10:47:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:43.070 ************************************ 00:36:43.070 END TEST nvmf_identify_passthru 00:36:43.070 ************************************ 00:36:43.070 10:47:20 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:43.070 10:47:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:43.070 10:47:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:43.070 10:47:20 -- common/autotest_common.sh@10 -- # set +x 00:36:43.070 ************************************ 00:36:43.070 START TEST nvmf_dif 00:36:43.070 ************************************ 00:36:43.070 10:47:20 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:43.070 * Looking for test storage... 
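The teardown's `iptr` step above works because every firewall rule the test added was tagged at insert time with `-m comment --comment 'SPDK_NVMF:...'`; cleanup is then just `iptables-save | grep -v SPDK_NVMF | iptables-restore`, dropping only the tagged rules. A root-free sketch of the filtering half on a fabricated ruleset (the rule text is illustrative, not a real saved table):

```shell
# Filter a saved ruleset, removing only the lines tagged SPDK_NVMF.
# In the real flow this output would be piped into iptables-restore.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:tagged"
-A INPUT -j DROP'

printf '%s\n' "$ruleset" | grep -v SPDK_NVMF
```

Tagging rules with a unique comment and filtering the save file is a common pattern for removing a test's firewall changes without tracking individual rule handles.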
00:36:43.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:43.070 10:47:20 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:43.070 10:47:20 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:36:43.070 10:47:20 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:43.070 10:47:20 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:43.071 10:47:20 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:43.071 10:47:20 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:43.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.071 --rc genhtml_branch_coverage=1 00:36:43.071 --rc genhtml_function_coverage=1 00:36:43.071 --rc genhtml_legend=1 00:36:43.071 --rc geninfo_all_blocks=1 00:36:43.071 --rc geninfo_unexecuted_blocks=1 00:36:43.071 00:36:43.071 ' 00:36:43.071 10:47:20 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:43.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.071 --rc genhtml_branch_coverage=1 00:36:43.071 --rc genhtml_function_coverage=1 00:36:43.071 --rc genhtml_legend=1 00:36:43.071 --rc geninfo_all_blocks=1 00:36:43.071 --rc geninfo_unexecuted_blocks=1 00:36:43.071 00:36:43.071 ' 00:36:43.071 10:47:20 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
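The `lt 1.15 2` check traced above splits both version strings on `.`/`-` into arrays and compares them field by field, padding the shorter one with zeros. A compressed sketch of the same idea (the function name is mine, not the `scripts/common.sh` helper):

```shell
# version_lt A B: succeed if version A sorts strictly before B,
# comparing dot-separated numeric fields left to right.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0}  # missing fields compare as 0, so 1.15 vs 2 works
    y=${b[i]:-0}
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1  # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This numeric, per-field comparison is why `1.15` is correctly less than `2` even though it sorts after it lexicographically.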
'LCOV=lcov 00:36:43.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.071 --rc genhtml_branch_coverage=1 00:36:43.071 --rc genhtml_function_coverage=1 00:36:43.071 --rc genhtml_legend=1 00:36:43.071 --rc geninfo_all_blocks=1 00:36:43.071 --rc geninfo_unexecuted_blocks=1 00:36:43.071 00:36:43.071 ' 00:36:43.071 10:47:20 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:43.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.071 --rc genhtml_branch_coverage=1 00:36:43.071 --rc genhtml_function_coverage=1 00:36:43.071 --rc genhtml_legend=1 00:36:43.071 --rc geninfo_all_blocks=1 00:36:43.071 --rc geninfo_unexecuted_blocks=1 00:36:43.071 00:36:43.071 ' 00:36:43.071 10:47:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:43.071 10:47:20 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:43.071 10:47:20 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:43.071 10:47:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.071 10:47:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.071 10:47:20 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.071 10:47:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:43.071 10:47:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:43.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:43.071 10:47:20 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:43.072 10:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:43.072 10:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:43.072 10:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:43.072 10:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:43.072 10:47:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.072 10:47:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:43.072 10:47:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:43.072 10:47:20 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:43.072 10:47:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:49.653 10:47:26 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:49.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:49.653 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.653 10:47:26 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:49.653 Found net devices under 0000:86:00.0: cvl_0_0 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.653 10:47:26 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:49.654 Found net devices under 0000:86:00.1: cvl_0_1 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.654 
10:47:26 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:49.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:49.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:36:49.654 00:36:49.654 --- 10.0.0.2 ping statistics --- 00:36:49.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.654 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:49.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:36:49.654 00:36:49.654 --- 10.0.0.1 ping statistics --- 00:36:49.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.654 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:49.654 10:47:26 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:51.563 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:51.563 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:36:51.563 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:51.563 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:51.823 10:47:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:51.823 10:47:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2916130 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2916130 00:36:51.823 10:47:29 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2916130 ']' 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:51.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.823 10:47:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:51.823 [2024-12-09 10:47:29.392344] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:36:51.823 [2024-12-09 10:47:29.392387] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.823 [2024-12-09 10:47:29.472129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.823 [2024-12-09 10:47:29.513118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.823 [2024-12-09 10:47:29.513154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:51.823 [2024-12-09 10:47:29.513161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.823 [2024-12-09 10:47:29.513167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.823 [2024-12-09 10:47:29.513172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:51.823 [2024-12-09 10:47:29.513695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:52.084 10:47:29 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.084 10:47:29 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:52.084 10:47:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:52.084 10:47:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.084 [2024-12-09 10:47:29.646372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.084 10:47:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:52.084 10:47:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.084 ************************************ 00:36:52.084 START TEST fio_dif_1_default 00:36:52.084 ************************************ 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.084 bdev_null0 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.084 [2024-12-09 10:47:29.714661] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:52.084 { 00:36:52.084 "params": { 00:36:52.084 "name": "Nvme$subsystem", 00:36:52.084 "trtype": "$TEST_TRANSPORT", 00:36:52.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:52.084 "adrfam": "ipv4", 00:36:52.084 "trsvcid": "$NVMF_PORT", 00:36:52.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:52.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:52.084 "hdgst": ${hdgst:-false}, 00:36:52.084 "ddgst": ${ddgst:-false} 00:36:52.084 }, 00:36:52.084 "method": "bdev_nvme_attach_controller" 00:36:52.084 } 00:36:52.084 EOF 00:36:52.084 )") 00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:36:52.084 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:52.085 "params": { 00:36:52.085 "name": "Nvme0", 00:36:52.085 "trtype": "tcp", 00:36:52.085 "traddr": "10.0.0.2", 00:36:52.085 "adrfam": "ipv4", 00:36:52.085 "trsvcid": "4420", 00:36:52.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.085 "hdgst": false, 00:36:52.085 "ddgst": false 00:36:52.085 }, 00:36:52.085 "method": "bdev_nvme_attach_controller" 00:36:52.085 }' 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:52.085 10:47:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:52.657 fio-3.35 
00:36:52.657 Starting 1 thread 00:37:04.898 00:37:04.898 filename0: (groupid=0, jobs=1): err= 0: pid=2916508: Mon Dec 9 10:47:40 2024 00:37:04.898 read: IOPS=207, BW=830KiB/s (850kB/s)(8320KiB/10020msec) 00:37:04.898 slat (nsec): min=5812, max=27152, avg=6064.88, stdev=823.01 00:37:04.898 clat (usec): min=366, max=44865, avg=19252.49, stdev=20330.80 00:37:04.898 lat (usec): min=372, max=44892, avg=19258.56, stdev=20330.77 00:37:04.898 clat percentiles (usec): 00:37:04.898 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 404], 00:37:04.898 | 30.00th=[ 412], 40.00th=[ 449], 50.00th=[ 594], 60.00th=[40633], 00:37:04.898 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:37:04.898 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:37:04.898 | 99.99th=[44827] 00:37:04.898 bw ( KiB/s): min= 672, max= 961, per=99.96%, avg=830.50, stdev=80.78, samples=20 00:37:04.898 iops : min= 168, max= 240, avg=207.60, stdev=20.18, samples=20 00:37:04.898 lat (usec) : 500=46.06%, 750=7.79% 00:37:04.898 lat (msec) : 50=46.15% 00:37:04.898 cpu : usr=92.17%, sys=7.58%, ctx=14, majf=0, minf=0 00:37:04.898 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.898 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.898 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:04.898 00:37:04.898 Run status group 0 (all jobs): 00:37:04.898 READ: bw=830KiB/s (850kB/s), 830KiB/s-830KiB/s (850kB/s-850kB/s), io=8320KiB (8520kB), run=10020-10020msec 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:04.898 
10:47:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.898 00:37:04.898 real 0m11.122s 00:37:04.898 user 0m15.452s 00:37:04.898 sys 0m1.079s 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:04.898 ************************************ 00:37:04.898 END TEST fio_dif_1_default 00:37:04.898 ************************************ 00:37:04.898 10:47:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:04.898 10:47:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:04.898 10:47:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.898 10:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:04.898 ************************************ 00:37:04.898 START TEST fio_dif_1_multi_subsystems 00:37:04.898 ************************************ 00:37:04.898 10:47:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:04.898 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.899 bdev_null0 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.899 [2024-12-09 10:47:40.906091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.899 bdev_null1 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:04.899 { 00:37:04.899 "params": { 00:37:04.899 "name": "Nvme$subsystem", 00:37:04.899 "trtype": "$TEST_TRANSPORT", 00:37:04.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:04.899 "adrfam": "ipv4", 00:37:04.899 "trsvcid": "$NVMF_PORT", 00:37:04.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:04.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:04.899 "hdgst": ${hdgst:-false}, 00:37:04.899 "ddgst": ${ddgst:-false} 00:37:04.899 }, 00:37:04.899 "method": "bdev_nvme_attach_controller" 00:37:04.899 } 00:37:04.899 EOF 00:37:04.899 )") 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:04.899 { 00:37:04.899 "params": { 00:37:04.899 "name": "Nvme$subsystem", 00:37:04.899 "trtype": "$TEST_TRANSPORT", 00:37:04.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:04.899 "adrfam": "ipv4", 00:37:04.899 "trsvcid": "$NVMF_PORT", 00:37:04.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:04.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:04.899 "hdgst": ${hdgst:-false}, 00:37:04.899 "ddgst": ${ddgst:-false} 00:37:04.899 }, 00:37:04.899 "method": "bdev_nvme_attach_controller" 00:37:04.899 } 00:37:04.899 EOF 00:37:04.899 )") 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:04.899 "params": { 00:37:04.899 "name": "Nvme0", 00:37:04.899 "trtype": "tcp", 00:37:04.899 "traddr": "10.0.0.2", 00:37:04.899 "adrfam": "ipv4", 00:37:04.899 "trsvcid": "4420", 00:37:04.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.899 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.899 "hdgst": false, 00:37:04.899 "ddgst": false 00:37:04.899 }, 00:37:04.899 "method": "bdev_nvme_attach_controller" 00:37:04.899 },{ 00:37:04.899 "params": { 00:37:04.899 "name": "Nvme1", 00:37:04.899 "trtype": "tcp", 00:37:04.899 "traddr": "10.0.0.2", 00:37:04.899 "adrfam": "ipv4", 00:37:04.899 "trsvcid": "4420", 00:37:04.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:04.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:04.899 "hdgst": false, 00:37:04.899 "ddgst": false 00:37:04.899 }, 00:37:04.899 "method": "bdev_nvme_attach_controller" 00:37:04.899 }' 00:37:04.899 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:04.900 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:04.900 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:04.900 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.900 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:04.900 10:47:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:04.900 10:47:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:04.900 10:47:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:04.900 10:47:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:04.900 10:47:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.900 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:04.900 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:04.900 fio-3.35 00:37:04.900 Starting 2 threads 00:37:14.886 00:37:14.886 filename0: (groupid=0, jobs=1): err= 0: pid=2918475: Mon Dec 9 10:47:52 2024 00:37:14.886 read: IOPS=189, BW=756KiB/s (774kB/s)(7584KiB/10031msec) 00:37:14.886 slat (nsec): min=5982, max=67552, avg=8612.71, stdev=5557.01 00:37:14.886 clat (usec): min=428, max=42571, avg=21135.70, stdev=20470.08 00:37:14.886 lat (usec): min=434, max=42597, avg=21144.31, stdev=20468.42 00:37:14.886 clat percentiles (usec): 00:37:14.886 | 1.00th=[ 461], 5.00th=[ 482], 10.00th=[ 498], 20.00th=[ 611], 00:37:14.886 | 30.00th=[ 627], 40.00th=[ 635], 50.00th=[41157], 60.00th=[41157], 00:37:14.886 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:37:14.886 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:14.886 | 99.99th=[42730] 00:37:14.886 bw ( KiB/s): min= 672, max= 768, per=66.01%, avg=756.80, stdev=23.85, samples=20 00:37:14.886 iops : min= 168, max= 192, avg=189.20, stdev= 5.96, samples=20 00:37:14.886 lat (usec) : 500=10.18%, 750=39.56%, 1000=0.05% 00:37:14.886 lat (msec) : 50=50.21% 00:37:14.886 cpu : usr=98.12%, sys=1.60%, ctx=29, majf=0, minf=47 00:37:14.886 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:14.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.886 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.886 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:14.886 filename1: (groupid=0, jobs=1): err= 0: pid=2918476: Mon Dec 9 10:47:52 2024 00:37:14.886 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:37:14.886 slat (nsec): min=6018, max=39723, avg=11077.79, stdev=8430.40 00:37:14.886 clat (usec): min=410, max=42587, avg=40979.35, stdev=3719.96 00:37:14.886 lat (usec): min=417, max=42619, avg=40990.43, stdev=3719.69 00:37:14.886 clat percentiles (usec): 00:37:14.886 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:14.886 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:14.886 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:14.886 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:14.886 | 99.99th=[42730] 00:37:14.886 bw ( KiB/s): min= 352, max= 416, per=33.88%, avg=388.80, stdev=15.66, samples=20 00:37:14.886 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:37:14.886 lat (usec) : 500=0.82% 00:37:14.886 lat (msec) : 50=99.18% 00:37:14.886 cpu : usr=98.27%, sys=1.46%, ctx=14, majf=0, minf=79 00:37:14.886 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.886 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.886 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:14.886 00:37:14.886 Run status group 0 (all jobs): 00:37:14.886 READ: bw=1145KiB/s (1173kB/s), 390KiB/s-756KiB/s (399kB/s-774kB/s), io=11.2MiB (11.8MB), run=10008-10031msec 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.886 10:47:52 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.886 00:37:14.886 real 0m11.606s 00:37:14.886 user 0m27.003s 00:37:14.886 sys 0m0.701s 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:14.886 10:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.886 ************************************ 00:37:14.886 END TEST fio_dif_1_multi_subsystems 00:37:14.886 ************************************ 00:37:14.886 10:47:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:14.886 10:47:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:14.886 10:47:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:14.886 10:47:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:14.886 ************************************ 00:37:14.886 START TEST fio_dif_rand_params 00:37:14.886 ************************************ 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:14.886 10:47:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:14.886 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.887 bdev_null0 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.887 [2024-12-09 10:47:52.584257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:14.887 { 
00:37:14.887 "params": { 00:37:14.887 "name": "Nvme$subsystem", 00:37:14.887 "trtype": "$TEST_TRANSPORT", 00:37:14.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.887 "adrfam": "ipv4", 00:37:14.887 "trsvcid": "$NVMF_PORT", 00:37:14.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.887 "hdgst": ${hdgst:-false}, 00:37:14.887 "ddgst": ${ddgst:-false} 00:37:14.887 }, 00:37:14.887 "method": "bdev_nvme_attach_controller" 00:37:14.887 } 00:37:14.887 EOF 00:37:14.887 )") 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:14.887 
10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:14.887 10:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:14.887 "params": { 00:37:14.887 "name": "Nvme0", 00:37:14.887 "trtype": "tcp", 00:37:14.887 "traddr": "10.0.0.2", 00:37:14.887 "adrfam": "ipv4", 00:37:14.887 "trsvcid": "4420", 00:37:14.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:14.887 "hdgst": false, 00:37:14.887 "ddgst": false 00:37:14.887 }, 00:37:14.887 "method": "bdev_nvme_attach_controller" 00:37:14.887 }' 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:15.159 10:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.427 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:15.427 ... 00:37:15.427 fio-3.35 00:37:15.427 Starting 3 threads 00:37:22.000 00:37:22.000 filename0: (groupid=0, jobs=1): err= 0: pid=2920407: Mon Dec 9 10:47:58 2024 00:37:22.000 read: IOPS=337, BW=42.1MiB/s (44.2MB/s)(211MiB/5003msec) 00:37:22.000 slat (nsec): min=6116, max=28674, avg=11121.39, stdev=2053.25 00:37:22.000 clat (usec): min=5254, max=49362, avg=8882.80, stdev=2642.44 00:37:22.000 lat (usec): min=5261, max=49374, avg=8893.93, stdev=2642.49 00:37:22.000 clat percentiles (usec): 00:37:22.000 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 7832], 00:37:22.000 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:37:22.000 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10683], 00:37:22.000 | 99.00th=[11600], 99.50th=[12256], 99.90th=[49546], 99.95th=[49546], 00:37:22.000 | 99.99th=[49546] 00:37:22.000 bw ( KiB/s): min=39680, max=45824, per=35.96%, avg=43150.22, stdev=1824.22, samples=9 00:37:22.001 iops : min= 310, max= 358, avg=337.11, stdev=14.25, samples=9 00:37:22.001 lat (msec) : 10=85.60%, 20=14.05%, 50=0.36% 00:37:22.001 cpu : usr=94.66%, sys=5.04%, ctx=6, majf=0, minf=53 00:37:22.001 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.001 issued rwts: total=1687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.001 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:22.001 filename0: (groupid=0, jobs=1): err= 0: pid=2920408: Mon Dec 9 10:47:58 2024 00:37:22.001 read: IOPS=316, BW=39.6MiB/s (41.5MB/s)(198MiB/5002msec) 00:37:22.001 slat (nsec): min=6196, max=28154, avg=11376.34, stdev=1975.34 
00:37:22.001 clat (usec): min=3183, max=52185, avg=9453.67, stdev=2310.88 00:37:22.001 lat (usec): min=3189, max=52196, avg=9465.05, stdev=2310.93 00:37:22.001 clat percentiles (usec): 00:37:22.001 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 7504], 20.00th=[ 8356], 00:37:22.001 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:37:22.001 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11469], 00:37:22.001 | 99.00th=[12780], 99.50th=[13304], 99.90th=[51643], 99.95th=[52167], 00:37:22.001 | 99.99th=[52167] 00:37:22.001 bw ( KiB/s): min=36608, max=44544, per=33.78%, avg=40533.33, stdev=2498.46, samples=9 00:37:22.001 iops : min= 286, max= 348, avg=316.67, stdev=19.52, samples=9 00:37:22.001 lat (msec) : 4=0.06%, 10=67.57%, 20=32.18%, 100=0.19% 00:37:22.001 cpu : usr=94.56%, sys=5.12%, ctx=7, majf=0, minf=56 00:37:22.001 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.001 issued rwts: total=1585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.001 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:22.001 filename0: (groupid=0, jobs=1): err= 0: pid=2920409: Mon Dec 9 10:47:58 2024 00:37:22.001 read: IOPS=284, BW=35.5MiB/s (37.3MB/s)(178MiB/5009msec) 00:37:22.001 slat (nsec): min=6135, max=25771, avg=11429.10, stdev=1811.42 00:37:22.001 clat (usec): min=5843, max=52196, avg=10538.47, stdev=4692.15 00:37:22.001 lat (usec): min=5856, max=52222, avg=10549.90, stdev=4692.18 00:37:22.001 clat percentiles (usec): 00:37:22.001 | 1.00th=[ 6390], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[ 8848], 00:37:22.001 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10421], 00:37:22.001 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11863], 95.00th=[12387], 00:37:22.001 | 99.00th=[49021], 99.50th=[50594], 99.90th=[52167], 
99.95th=[52167], 00:37:22.001 | 99.99th=[52167] 00:37:22.001 bw ( KiB/s): min=27392, max=38656, per=30.31%, avg=36377.60, stdev=3542.00, samples=10 00:37:22.001 iops : min= 214, max= 302, avg=284.20, stdev=27.67, samples=10 00:37:22.001 lat (msec) : 10=49.02%, 20=49.72%, 50=0.63%, 100=0.63% 00:37:22.001 cpu : usr=94.53%, sys=5.15%, ctx=7, majf=0, minf=39 00:37:22.001 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.001 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.001 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:22.001 00:37:22.001 Run status group 0 (all jobs): 00:37:22.001 READ: bw=117MiB/s (123MB/s), 35.5MiB/s-42.1MiB/s (37.3MB/s-44.2MB/s), io=587MiB (616MB), run=5002-5009msec 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.001 bdev_null0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.001 
10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.001 [2024-12-09 10:47:58.667285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:22.001 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.002 bdev_null1 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 
10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:37:22.002 bdev_null2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.002 { 00:37:22.002 "params": { 00:37:22.002 "name": "Nvme$subsystem", 00:37:22.002 "trtype": "$TEST_TRANSPORT", 00:37:22.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.002 "adrfam": "ipv4", 00:37:22.002 "trsvcid": "$NVMF_PORT", 00:37:22.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.002 "hdgst": ${hdgst:-false}, 00:37:22.002 "ddgst": ${ddgst:-false} 00:37:22.002 }, 00:37:22.002 "method": "bdev_nvme_attach_controller" 00:37:22.002 } 00:37:22.002 EOF 00:37:22.002 )") 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.002 { 00:37:22.002 "params": { 00:37:22.002 "name": "Nvme$subsystem", 00:37:22.002 "trtype": "$TEST_TRANSPORT", 00:37:22.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.002 "adrfam": "ipv4", 00:37:22.002 "trsvcid": "$NVMF_PORT", 00:37:22.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.002 "hdgst": ${hdgst:-false}, 00:37:22.002 "ddgst": ${ddgst:-false} 00:37:22.002 }, 00:37:22.002 "method": "bdev_nvme_attach_controller" 00:37:22.002 } 00:37:22.002 EOF 00:37:22.002 )") 00:37:22.002 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.003 { 00:37:22.003 "params": { 00:37:22.003 "name": "Nvme$subsystem", 00:37:22.003 "trtype": "$TEST_TRANSPORT", 00:37:22.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.003 "adrfam": "ipv4", 00:37:22.003 "trsvcid": "$NVMF_PORT", 00:37:22.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.003 "hdgst": ${hdgst:-false}, 00:37:22.003 "ddgst": ${ddgst:-false} 00:37:22.003 }, 00:37:22.003 "method": "bdev_nvme_attach_controller" 00:37:22.003 } 00:37:22.003 EOF 00:37:22.003 )") 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:22.003 "params": { 00:37:22.003 "name": "Nvme0", 00:37:22.003 "trtype": "tcp", 00:37:22.003 "traddr": "10.0.0.2", 00:37:22.003 "adrfam": "ipv4", 00:37:22.003 "trsvcid": "4420", 00:37:22.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.003 "hdgst": false, 00:37:22.003 "ddgst": false 00:37:22.003 }, 00:37:22.003 "method": "bdev_nvme_attach_controller" 00:37:22.003 },{ 00:37:22.003 "params": { 00:37:22.003 "name": "Nvme1", 00:37:22.003 "trtype": "tcp", 00:37:22.003 "traddr": "10.0.0.2", 00:37:22.003 "adrfam": "ipv4", 00:37:22.003 "trsvcid": "4420", 00:37:22.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:22.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:22.003 "hdgst": false, 00:37:22.003 "ddgst": false 00:37:22.003 }, 00:37:22.003 "method": "bdev_nvme_attach_controller" 00:37:22.003 },{ 00:37:22.003 "params": { 00:37:22.003 "name": "Nvme2", 00:37:22.003 "trtype": "tcp", 00:37:22.003 "traddr": "10.0.0.2", 00:37:22.003 "adrfam": "ipv4", 00:37:22.003 "trsvcid": "4420", 00:37:22.003 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:22.003 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:22.003 "hdgst": false, 00:37:22.003 "ddgst": false 00:37:22.003 }, 00:37:22.003 "method": "bdev_nvme_attach_controller" 00:37:22.003 }' 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.003 10:47:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:22.003 10:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.003 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.003 ... 00:37:22.003 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.003 ... 00:37:22.003 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.003 ... 
00:37:22.003 fio-3.35 00:37:22.003 Starting 24 threads 00:37:34.219 00:37:34.219 filename0: (groupid=0, jobs=1): err= 0: pid=2921486: Mon Dec 9 10:48:10 2024 00:37:34.219 read: IOPS=594, BW=2378KiB/s (2435kB/s)(23.2MiB/10006msec) 00:37:34.219 slat (nsec): min=6357, max=74062, avg=23673.76, stdev=14152.39 00:37:34.219 clat (usec): min=7392, max=35496, avg=26729.34, stdev=2651.55 00:37:34.219 lat (usec): min=7406, max=35504, avg=26753.02, stdev=2652.80 00:37:34.219 clat percentiles (usec): 00:37:34.219 | 1.00th=[14615], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.219 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:37:34.219 | 70.00th=[27919], 80.00th=[28443], 90.00th=[29754], 95.00th=[30278], 00:37:34.219 | 99.00th=[30802], 99.50th=[31065], 99.90th=[32375], 99.95th=[35390], 00:37:34.219 | 99.99th=[35390] 00:37:34.219 bw ( KiB/s): min= 2176, max= 2784, per=4.21%, avg=2382.84, stdev=161.46, samples=19 00:37:34.219 iops : min= 544, max= 696, avg=595.63, stdev=40.34, samples=19 00:37:34.219 lat (msec) : 10=0.24%, 20=2.29%, 50=97.48% 00:37:34.219 cpu : usr=98.82%, sys=0.78%, ctx=11, majf=0, minf=27 00:37:34.219 IO depths : 1=6.0%, 2=12.1%, 4=24.3%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:34.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.219 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.219 issued rwts: total=5948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.219 filename0: (groupid=0, jobs=1): err= 0: pid=2921487: Mon Dec 9 10:48:10 2024 00:37:34.219 read: IOPS=587, BW=2350KiB/s (2406kB/s)(23.0MiB/10023msec) 00:37:34.219 slat (nsec): min=7263, max=92063, avg=44106.08, stdev=19848.60 00:37:34.219 clat (usec): min=15542, max=31106, avg=26860.65, stdev=1804.99 00:37:34.219 lat (usec): min=15560, max=31124, avg=26904.75, stdev=1804.45 00:37:34.219 clat percentiles (usec): 00:37:34.219 | 
1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.219 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:37:34.219 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29754], 95.00th=[30278], 00:37:34.219 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:37:34.219 | 99.99th=[31065] 00:37:34.219 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2350.05, stdev=114.02, samples=19 00:37:34.219 iops : min= 542, max= 640, avg=587.37, stdev=28.46, samples=19 00:37:34.219 lat (msec) : 20=0.27%, 50=99.73% 00:37:34.219 cpu : usr=98.19%, sys=1.17%, ctx=55, majf=0, minf=32 00:37:34.219 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.219 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.219 issued rwts: total=5888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.219 filename0: (groupid=0, jobs=1): err= 0: pid=2921488: Mon Dec 9 10:48:10 2024 00:37:34.219 read: IOPS=586, BW=2348KiB/s (2404kB/s)(22.9MiB/10005msec) 00:37:34.219 slat (nsec): min=6693, max=83550, avg=33086.06, stdev=16949.26 00:37:34.219 clat (usec): min=17670, max=32370, avg=26964.06, stdev=1753.19 00:37:34.219 lat (usec): min=17678, max=32426, avg=26997.14, stdev=1753.92 00:37:34.219 clat percentiles (usec): 00:37:34.219 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.219 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:37:34.219 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29754], 95.00th=[30278], 00:37:34.219 | 99.00th=[30540], 99.50th=[31065], 99.90th=[32113], 99.95th=[32375], 00:37:34.219 | 99.99th=[32375] 00:37:34.219 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2350.58, stdev=135.93, samples=19 00:37:34.219 iops : min= 544, max= 640, avg=587.58, stdev=33.96, samples=19 
00:37:34.219 lat (msec) : 20=0.27%, 50=99.73% 00:37:34.219 cpu : usr=98.27%, sys=1.19%, ctx=39, majf=0, minf=33 00:37:34.219 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.219 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.219 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.219 filename0: (groupid=0, jobs=1): err= 0: pid=2921489: Mon Dec 9 10:48:10 2024 00:37:34.219 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10008msec) 00:37:34.219 slat (usec): min=4, max=105, avg=51.56, stdev=22.45 00:37:34.219 clat (usec): min=9112, max=43108, avg=26781.25, stdev=2161.79 00:37:34.219 lat (usec): min=9126, max=43124, avg=26832.81, stdev=2163.50 00:37:34.219 clat percentiles (usec): 00:37:34.219 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.219 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:37:34.219 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.219 | 99.00th=[30802], 99.50th=[31065], 99.90th=[43254], 99.95th=[43254], 00:37:34.219 | 99.99th=[43254] 00:37:34.219 bw ( KiB/s): min= 2048, max= 2560, per=4.14%, avg=2344.63, stdev=154.09, samples=19 00:37:34.219 iops : min= 512, max= 640, avg=586.16, stdev=38.52, samples=19 00:37:34.219 lat (msec) : 10=0.27%, 20=0.36%, 50=99.37% 00:37:34.219 cpu : usr=98.39%, sys=0.95%, ctx=60, majf=0, minf=22 00:37:34.219 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 
00:37:34.220 filename0: (groupid=0, jobs=1): err= 0: pid=2921490: Mon Dec 9 10:48:10 2024 00:37:34.220 read: IOPS=586, BW=2345KiB/s (2401kB/s)(22.9MiB/10014msec) 00:37:34.220 slat (nsec): min=4278, max=92049, avg=45961.54, stdev=19443.93 00:37:34.220 clat (usec): min=13087, max=44427, avg=26859.73, stdev=2044.22 00:37:34.220 lat (usec): min=13106, max=44441, avg=26905.69, stdev=2043.80 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.220 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:37:34.220 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.220 | 99.00th=[30802], 99.50th=[31065], 99.90th=[44303], 99.95th=[44303], 00:37:34.220 | 99.99th=[44303] 00:37:34.220 bw ( KiB/s): min= 2048, max= 2560, per=4.14%, avg=2344.16, stdev=148.21, samples=19 00:37:34.220 iops : min= 512, max= 640, avg=586.00, stdev=37.06, samples=19 00:37:34.220 lat (msec) : 20=0.24%, 50=99.76% 00:37:34.220 cpu : usr=98.66%, sys=0.84%, ctx=53, majf=0, minf=34 00:37:34.220 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.220 filename0: (groupid=0, jobs=1): err= 0: pid=2921491: Mon Dec 9 10:48:10 2024 00:37:34.220 read: IOPS=589, BW=2359KiB/s (2415kB/s)(23.1MiB/10012msec) 00:37:34.220 slat (usec): min=7, max=108, avg=35.52, stdev=24.14 00:37:34.220 clat (usec): min=7724, max=38087, avg=26870.71, stdev=2270.99 00:37:34.220 lat (usec): min=7734, max=38100, avg=26906.22, stdev=2270.67 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[16319], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.220 | 
30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:37:34.220 | 70.00th=[27657], 80.00th=[28705], 90.00th=[29492], 95.00th=[30278], 00:37:34.220 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:37:34.220 | 99.99th=[38011] 00:37:34.220 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2363.84, stdev=115.96, samples=19 00:37:34.220 iops : min= 542, max= 640, avg=590.84, stdev=29.02, samples=19 00:37:34.220 lat (msec) : 10=0.27%, 20=0.81%, 50=98.92% 00:37:34.220 cpu : usr=97.91%, sys=1.32%, ctx=140, majf=0, minf=30 00:37:34.220 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.220 filename0: (groupid=0, jobs=1): err= 0: pid=2921492: Mon Dec 9 10:48:10 2024 00:37:34.220 read: IOPS=597, BW=2390KiB/s (2448kB/s)(23.3MiB/10001msec) 00:37:34.220 slat (nsec): min=3781, max=93742, avg=34023.65, stdev=17917.35 00:37:34.220 clat (usec): min=10565, max=46593, avg=26495.02, stdev=3865.24 00:37:34.220 lat (usec): min=10577, max=46636, avg=26529.05, stdev=3868.91 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[15926], 5.00th=[18482], 10.00th=[22152], 20.00th=[25035], 00:37:34.220 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:37:34.220 | 70.00th=[27919], 80.00th=[28705], 90.00th=[30016], 95.00th=[30802], 00:37:34.220 | 99.00th=[40109], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:37:34.220 | 99.99th=[46400] 00:37:34.220 bw ( KiB/s): min= 2144, max= 2992, per=4.23%, avg=2394.68, stdev=214.49, samples=19 00:37:34.220 iops : min= 536, max= 748, avg=598.63, stdev=53.67, samples=19 00:37:34.220 lat (msec) : 20=5.79%, 50=94.21% 00:37:34.220 cpu : 
usr=98.26%, sys=1.06%, ctx=186, majf=0, minf=36 00:37:34.220 IO depths : 1=3.5%, 2=8.5%, 4=21.0%, 8=57.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:37:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.220 filename0: (groupid=0, jobs=1): err= 0: pid=2921493: Mon Dec 9 10:48:10 2024 00:37:34.220 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10009msec) 00:37:34.220 slat (usec): min=4, max=119, avg=39.69, stdev=15.87 00:37:34.220 clat (usec): min=22585, max=31110, avg=26906.02, stdev=1708.48 00:37:34.220 lat (usec): min=22630, max=31161, avg=26945.71, stdev=1709.23 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25297], 00:37:34.220 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:37:34.220 | 70.00th=[27919], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:37:34.220 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:37:34.220 | 99.99th=[31065] 00:37:34.220 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2350.89, stdev=122.66, samples=19 00:37:34.220 iops : min= 542, max= 640, avg=587.68, stdev=30.73, samples=19 00:37:34.220 lat (msec) : 50=100.00% 00:37:34.220 cpu : usr=98.73%, sys=0.84%, ctx=29, majf=0, minf=36 00:37:34.220 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.220 filename1: (groupid=0, jobs=1): err= 0: pid=2921494: Mon Dec 9 10:48:10 2024 
00:37:34.220 read: IOPS=589, BW=2359KiB/s (2415kB/s)(23.1MiB/10012msec) 00:37:34.220 slat (usec): min=6, max=105, avg=38.58, stdev=22.18 00:37:34.220 clat (usec): min=7746, max=31273, avg=26830.18, stdev=2272.15 00:37:34.220 lat (usec): min=7781, max=31293, avg=26868.76, stdev=2271.84 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[16319], 5.00th=[24773], 10.00th=[25035], 20.00th=[25560], 00:37:34.220 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:37:34.220 | 70.00th=[27919], 80.00th=[28705], 90.00th=[29754], 95.00th=[30278], 00:37:34.220 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:37:34.220 | 99.99th=[31327] 00:37:34.220 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2363.84, stdev=115.96, samples=19 00:37:34.220 iops : min= 542, max= 640, avg=590.84, stdev=29.02, samples=19 00:37:34.220 lat (msec) : 10=0.27%, 20=0.81%, 50=98.92% 00:37:34.220 cpu : usr=98.42%, sys=1.03%, ctx=60, majf=0, minf=30 00:37:34.220 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.220 filename1: (groupid=0, jobs=1): err= 0: pid=2921495: Mon Dec 9 10:48:10 2024 00:37:34.220 read: IOPS=589, BW=2357KiB/s (2413kB/s)(23.1MiB/10021msec) 00:37:34.220 slat (nsec): min=4098, max=91655, avg=33218.89, stdev=21312.12 00:37:34.220 clat (usec): min=7783, max=31150, avg=26897.99, stdev=2207.93 00:37:34.220 lat (usec): min=7803, max=31172, avg=26931.21, stdev=2204.48 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.220 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:37:34.220 | 
70.00th=[27919], 80.00th=[28705], 90.00th=[29754], 95.00th=[30278], 00:37:34.220 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:37:34.220 | 99.99th=[31065] 00:37:34.220 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2354.40, stdev=112.65, samples=20 00:37:34.220 iops : min= 542, max= 640, avg=588.50, stdev=28.16, samples=20 00:37:34.220 lat (msec) : 10=0.30%, 20=0.51%, 50=99.19% 00:37:34.220 cpu : usr=98.45%, sys=1.00%, ctx=76, majf=0, minf=46 00:37:34.220 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.220 filename1: (groupid=0, jobs=1): err= 0: pid=2921497: Mon Dec 9 10:48:10 2024 00:37:34.220 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10009msec) 00:37:34.220 slat (usec): min=7, max=103, avg=51.12, stdev=22.70 00:37:34.220 clat (usec): min=9146, max=51757, avg=26778.47, stdev=2154.16 00:37:34.220 lat (usec): min=9164, max=51773, avg=26829.60, stdev=2156.10 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.220 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:37:34.220 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.220 | 99.00th=[30802], 99.50th=[31065], 99.90th=[43254], 99.95th=[43254], 00:37:34.220 | 99.99th=[51643] 00:37:34.220 bw ( KiB/s): min= 2048, max= 2560, per=4.14%, avg=2344.63, stdev=154.09, samples=19 00:37:34.220 iops : min= 512, max= 640, avg=586.16, stdev=38.52, samples=19 00:37:34.220 lat (msec) : 10=0.27%, 20=0.27%, 50=99.42%, 100=0.03% 00:37:34.220 cpu : usr=98.76%, sys=0.78%, ctx=36, majf=0, minf=27 00:37:34.220 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.220 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.220 filename1: (groupid=0, jobs=1): err= 0: pid=2921498: Mon Dec 9 10:48:10 2024 00:37:34.220 read: IOPS=586, BW=2346KiB/s (2403kB/s)(22.9MiB/10010msec) 00:37:34.220 slat (nsec): min=5064, max=87216, avg=40545.47, stdev=16865.80 00:37:34.220 clat (usec): min=12665, max=44518, avg=26893.19, stdev=2001.75 00:37:34.220 lat (usec): min=12683, max=44533, avg=26933.73, stdev=2001.67 00:37:34.220 clat percentiles (usec): 00:37:34.220 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.221 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:37:34.221 | 70.00th=[27919], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278], 00:37:34.221 | 99.00th=[30802], 99.50th=[31065], 99.90th=[40633], 99.95th=[40633], 00:37:34.221 | 99.99th=[44303] 00:37:34.221 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2344.16, stdev=113.41, samples=19 00:37:34.221 iops : min= 544, max= 640, avg=586.00, stdev=28.37, samples=19 00:37:34.221 lat (msec) : 20=0.27%, 50=99.73% 00:37:34.221 cpu : usr=98.45%, sys=0.98%, ctx=65, majf=0, minf=35 00:37:34.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.221 filename1: (groupid=0, jobs=1): err= 0: pid=2921499: Mon Dec 9 10:48:10 2024 00:37:34.221 read: IOPS=589, BW=2359KiB/s 
(2415kB/s)(23.1MiB/10012msec) 00:37:34.221 slat (usec): min=7, max=122, avg=47.64, stdev=20.36 00:37:34.221 clat (usec): min=7753, max=31324, avg=26746.03, stdev=2244.78 00:37:34.221 lat (usec): min=7772, max=31348, avg=26793.67, stdev=2246.55 00:37:34.221 clat percentiles (usec): 00:37:34.221 | 1.00th=[16319], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.221 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:37:34.221 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.221 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:37:34.221 | 99.99th=[31327] 00:37:34.221 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2363.84, stdev=115.96, samples=19 00:37:34.221 iops : min= 542, max= 640, avg=590.84, stdev=29.02, samples=19 00:37:34.221 lat (msec) : 10=0.24%, 20=0.88%, 50=98.88% 00:37:34.221 cpu : usr=98.22%, sys=1.22%, ctx=63, majf=0, minf=35 00:37:34.221 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.221 filename1: (groupid=0, jobs=1): err= 0: pid=2921500: Mon Dec 9 10:48:10 2024 00:37:34.221 read: IOPS=587, BW=2350KiB/s (2406kB/s)(23.0MiB/10022msec) 00:37:34.221 slat (usec): min=6, max=235, avg=36.69, stdev=14.88 00:37:34.221 clat (usec): min=15525, max=33904, avg=26929.11, stdev=1809.98 00:37:34.221 lat (usec): min=15538, max=33938, avg=26965.80, stdev=1810.49 00:37:34.221 clat percentiles (usec): 00:37:34.221 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.221 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:37:34.221 | 70.00th=[27919], 80.00th=[28705], 90.00th=[29492], 
95.00th=[30278], 00:37:34.221 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:37:34.221 | 99.99th=[33817] 00:37:34.221 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2350.05, stdev=114.02, samples=19 00:37:34.221 iops : min= 542, max= 640, avg=587.37, stdev=28.46, samples=19 00:37:34.221 lat (msec) : 20=0.27%, 50=99.73% 00:37:34.221 cpu : usr=98.64%, sys=0.94%, ctx=36, majf=0, minf=41 00:37:34.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 issued rwts: total=5888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.221 filename1: (groupid=0, jobs=1): err= 0: pid=2921501: Mon Dec 9 10:48:10 2024 00:37:34.221 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10008msec) 00:37:34.221 slat (nsec): min=4281, max=89060, avg=46304.17, stdev=16534.16 00:37:34.221 clat (usec): min=9244, max=43399, avg=26875.52, stdev=2126.06 00:37:34.221 lat (usec): min=9256, max=43419, avg=26921.82, stdev=2127.45 00:37:34.221 clat percentiles (usec): 00:37:34.221 | 1.00th=[24249], 5.00th=[24773], 10.00th=[24773], 20.00th=[25297], 00:37:34.221 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:37:34.221 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.221 | 99.00th=[30802], 99.50th=[31065], 99.90th=[43254], 99.95th=[43254], 00:37:34.221 | 99.99th=[43254] 00:37:34.221 bw ( KiB/s): min= 2048, max= 2560, per=4.14%, avg=2344.63, stdev=154.09, samples=19 00:37:34.221 iops : min= 512, max= 640, avg=586.16, stdev=38.52, samples=19 00:37:34.221 lat (msec) : 10=0.27%, 20=0.31%, 50=99.42% 00:37:34.221 cpu : usr=98.71%, sys=0.81%, ctx=30, majf=0, minf=30 00:37:34.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:37:34.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.221 filename1: (groupid=0, jobs=1): err= 0: pid=2921502: Mon Dec 9 10:48:10 2024 00:37:34.221 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10009msec) 00:37:34.221 slat (nsec): min=5098, max=99963, avg=50233.35, stdev=19782.48 00:37:34.221 clat (usec): min=9122, max=51952, avg=26820.03, stdev=2154.80 00:37:34.221 lat (usec): min=9138, max=51966, avg=26870.27, stdev=2156.28 00:37:34.221 clat percentiles (usec): 00:37:34.221 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.221 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:37:34.221 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.221 | 99.00th=[30802], 99.50th=[31065], 99.90th=[43254], 99.95th=[43254], 00:37:34.221 | 99.99th=[52167] 00:37:34.221 bw ( KiB/s): min= 2052, max= 2560, per=4.14%, avg=2344.37, stdev=141.47, samples=19 00:37:34.221 iops : min= 513, max= 640, avg=586.05, stdev=35.38, samples=19 00:37:34.221 lat (msec) : 10=0.27%, 20=0.26%, 50=99.44%, 100=0.03% 00:37:34.221 cpu : usr=98.23%, sys=1.15%, ctx=82, majf=0, minf=22 00:37:34.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.221 filename2: (groupid=0, jobs=1): err= 0: pid=2921503: Mon Dec 9 10:48:10 2024 00:37:34.221 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.0MiB/10018msec) 00:37:34.221 slat (usec): 
min=11, max=102, avg=52.75, stdev=20.96 00:37:34.221 clat (usec): min=16038, max=31317, avg=26758.74, stdev=1835.92 00:37:34.221 lat (usec): min=16049, max=31338, avg=26811.50, stdev=1837.96 00:37:34.221 clat percentiles (usec): 00:37:34.221 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.221 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:37:34.221 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.221 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327], 00:37:34.221 | 99.99th=[31327] 00:37:34.221 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2350.63, stdev=114.19, samples=19 00:37:34.221 iops : min= 544, max= 640, avg=587.58, stdev=28.49, samples=19 00:37:34.221 lat (msec) : 20=0.48%, 50=99.52% 00:37:34.221 cpu : usr=98.81%, sys=0.77%, ctx=16, majf=0, minf=29 00:37:34.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 issued rwts: total=5888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.221 filename2: (groupid=0, jobs=1): err= 0: pid=2921504: Mon Dec 9 10:48:10 2024 00:37:34.221 read: IOPS=586, BW=2345KiB/s (2401kB/s)(22.9MiB/10018msec) 00:37:34.221 slat (nsec): min=6525, max=82487, avg=32498.85, stdev=15327.68 00:37:34.221 clat (usec): min=15395, max=36432, avg=26996.00, stdev=1840.08 00:37:34.221 lat (usec): min=15436, max=36459, avg=27028.49, stdev=1840.63 00:37:34.221 clat percentiles (usec): 00:37:34.221 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.221 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:37:34.221 | 70.00th=[27919], 80.00th=[28705], 90.00th=[29754], 95.00th=[30278], 00:37:34.221 | 99.00th=[30802], 
99.50th=[31851], 99.90th=[36439], 99.95th=[36439], 00:37:34.221 | 99.99th=[36439] 00:37:34.221 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2344.16, stdev=120.88, samples=19 00:37:34.221 iops : min= 544, max= 640, avg=586.00, stdev=30.19, samples=19 00:37:34.221 lat (msec) : 20=0.24%, 50=99.76% 00:37:34.221 cpu : usr=98.60%, sys=0.92%, ctx=33, majf=0, minf=34 00:37:34.221 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.221 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.221 filename2: (groupid=0, jobs=1): err= 0: pid=2921505: Mon Dec 9 10:48:10 2024 00:37:34.221 read: IOPS=587, BW=2350KiB/s (2406kB/s)(23.0MiB/10022msec) 00:37:34.221 slat (nsec): min=6583, max=89427, avg=34903.11, stdev=19525.68 00:37:34.221 clat (usec): min=13921, max=31147, avg=26969.08, stdev=1820.36 00:37:34.221 lat (usec): min=13931, max=31180, avg=27003.99, stdev=1817.84 00:37:34.221 clat percentiles (usec): 00:37:34.221 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.221 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:37:34.221 | 70.00th=[27919], 80.00th=[28705], 90.00th=[29754], 95.00th=[30278], 00:37:34.221 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:37:34.221 | 99.99th=[31065] 00:37:34.221 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2350.26, stdev=113.93, samples=19 00:37:34.221 iops : min= 542, max= 640, avg=587.42, stdev=28.44, samples=19 00:37:34.221 lat (msec) : 20=0.27%, 50=99.73% 00:37:34.221 cpu : usr=98.43%, sys=1.04%, ctx=69, majf=0, minf=41 00:37:34.222 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:37:34.222 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 issued rwts: total=5888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.222 filename2: (groupid=0, jobs=1): err= 0: pid=2921506: Mon Dec 9 10:48:10 2024 00:37:34.222 read: IOPS=629, BW=2518KiB/s (2578kB/s)(24.6MiB/10021msec) 00:37:34.222 slat (usec): min=4, max=102, avg=20.40, stdev=18.66 00:37:34.222 clat (usec): min=9326, max=41667, avg=25320.76, stdev=4719.50 00:37:34.222 lat (usec): min=9335, max=41674, avg=25341.16, stdev=4721.53 00:37:34.222 clat percentiles (usec): 00:37:34.222 | 1.00th=[15139], 5.00th=[16909], 10.00th=[18482], 20.00th=[21103], 00:37:34.222 | 30.00th=[23725], 40.00th=[25297], 50.00th=[26084], 60.00th=[26608], 00:37:34.222 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30278], 95.00th=[32900], 00:37:34.222 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40633], 99.95th=[41681], 00:37:34.222 | 99.99th=[41681] 00:37:34.222 bw ( KiB/s): min= 2192, max= 3456, per=4.47%, avg=2533.00, stdev=268.48, samples=19 00:37:34.222 iops : min= 548, max= 864, avg=633.21, stdev=67.15, samples=19 00:37:34.222 lat (msec) : 10=0.13%, 20=15.54%, 50=84.34% 00:37:34.222 cpu : usr=98.45%, sys=1.05%, ctx=59, majf=0, minf=32 00:37:34.222 IO depths : 1=0.1%, 2=0.5%, 4=4.0%, 8=79.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:37:34.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 complete : 0=0.0%, 4=89.4%, 8=8.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 issued rwts: total=6308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.222 filename2: (groupid=0, jobs=1): err= 0: pid=2921507: Mon Dec 9 10:48:10 2024 00:37:34.222 read: IOPS=601, BW=2408KiB/s (2466kB/s)(23.5MiB/10007msec) 00:37:34.222 slat (usec): min=5, max=249, avg=36.83, stdev=23.28 00:37:34.222 clat (usec): min=10228, max=47648, 
avg=26278.21, stdev=4481.26 00:37:34.222 lat (usec): min=10238, max=47717, avg=26315.04, stdev=4487.71 00:37:34.222 clat percentiles (usec): 00:37:34.222 | 1.00th=[16319], 5.00th=[17695], 10.00th=[20055], 20.00th=[24511], 00:37:34.222 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870], 00:37:34.222 | 70.00th=[27919], 80.00th=[28705], 90.00th=[30016], 95.00th=[33424], 00:37:34.222 | 99.00th=[41681], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:37:34.222 | 99.99th=[47449] 00:37:34.222 bw ( KiB/s): min= 2048, max= 2928, per=4.26%, avg=2414.53, stdev=237.70, samples=19 00:37:34.222 iops : min= 512, max= 732, avg=603.63, stdev=59.43, samples=19 00:37:34.222 lat (msec) : 20=10.01%, 50=89.99% 00:37:34.222 cpu : usr=98.67%, sys=0.86%, ctx=56, majf=0, minf=38 00:37:34.222 IO depths : 1=2.0%, 2=6.3%, 4=18.5%, 8=62.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:37:34.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 complete : 0=0.0%, 4=92.6%, 8=2.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 issued rwts: total=6024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.222 filename2: (groupid=0, jobs=1): err= 0: pid=2921508: Mon Dec 9 10:48:10 2024 00:37:34.222 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10009msec) 00:37:34.222 slat (usec): min=7, max=105, avg=50.35, stdev=23.57 00:37:34.222 clat (usec): min=9126, max=43415, avg=26772.71, stdev=2124.00 00:37:34.222 lat (usec): min=9141, max=43429, avg=26823.06, stdev=2126.17 00:37:34.222 clat percentiles (usec): 00:37:34.222 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.222 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:37:34.222 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.222 | 99.00th=[30802], 99.50th=[31065], 99.90th=[43254], 99.95th=[43254], 00:37:34.222 | 99.99th=[43254] 00:37:34.222 bw ( KiB/s): min= 
2048, max= 2560, per=4.14%, avg=2344.63, stdev=154.09, samples=19 00:37:34.222 iops : min= 512, max= 640, avg=586.16, stdev=38.52, samples=19 00:37:34.222 lat (msec) : 10=0.27%, 20=0.22%, 50=99.51% 00:37:34.222 cpu : usr=97.97%, sys=1.17%, ctx=226, majf=0, minf=27 00:37:34.222 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.222 filename2: (groupid=0, jobs=1): err= 0: pid=2921509: Mon Dec 9 10:48:10 2024 00:37:34.222 read: IOPS=589, BW=2359KiB/s (2415kB/s)(23.1MiB/10012msec) 00:37:34.222 slat (usec): min=7, max=108, avg=49.64, stdev=22.46 00:37:34.222 clat (usec): min=6539, max=31442, avg=26729.01, stdev=2260.56 00:37:34.222 lat (usec): min=6547, max=31470, avg=26778.64, stdev=2262.21 00:37:34.222 clat percentiles (usec): 00:37:34.222 | 1.00th=[16319], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:37:34.222 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:37:34.222 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016], 00:37:34.222 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:37:34.222 | 99.99th=[31327] 00:37:34.222 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2363.84, stdev=115.96, samples=19 00:37:34.222 iops : min= 542, max= 640, avg=590.84, stdev=29.02, samples=19 00:37:34.222 lat (msec) : 10=0.30%, 20=0.78%, 50=98.92% 00:37:34.222 cpu : usr=97.57%, sys=1.42%, ctx=161, majf=0, minf=37 00:37:34.222 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:34.222 issued rwts: total=5904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.222 filename2: (groupid=0, jobs=1): err= 0: pid=2921510: Mon Dec 9 10:48:10 2024 00:37:34.222 read: IOPS=585, BW=2342KiB/s (2398kB/s)(22.9MiB/10002msec) 00:37:34.222 slat (nsec): min=6927, max=91540, avg=35340.50, stdev=17410.99 00:37:34.222 clat (usec): min=14440, max=52247, avg=26980.57, stdev=2298.38 00:37:34.222 lat (usec): min=14450, max=52287, avg=27015.91, stdev=2300.09 00:37:34.222 clat percentiles (usec): 00:37:34.222 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:34.222 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:37:34.222 | 70.00th=[27657], 80.00th=[28705], 90.00th=[29754], 95.00th=[30278], 00:37:34.222 | 99.00th=[31065], 99.50th=[32375], 99.90th=[52167], 99.95th=[52167], 00:37:34.222 | 99.99th=[52167] 00:37:34.222 bw ( KiB/s): min= 2048, max= 2560, per=4.14%, avg=2343.79, stdev=133.31, samples=19 00:37:34.222 iops : min= 512, max= 640, avg=585.95, stdev=33.33, samples=19 00:37:34.222 lat (msec) : 20=0.44%, 50=99.28%, 100=0.27% 00:37:34.222 cpu : usr=98.48%, sys=0.98%, ctx=125, majf=0, minf=31 00:37:34.222 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:34.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.222 issued rwts: total=5856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.222 00:37:34.222 Run status group 0 (all jobs): 00:37:34.222 READ: bw=55.3MiB/s (58.0MB/s), 2342KiB/s-2518KiB/s (2398kB/s-2578kB/s), io=554MiB (581MB), run=10001-10023msec 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local 
sub 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:34.222 10:48:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:34.222 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 bdev_null0 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:34.223 10:48:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 [2024-12-09 10:48:10.517997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 bdev_null1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 
10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.223 { 00:37:34.223 "params": { 00:37:34.223 "name": "Nvme$subsystem", 00:37:34.223 "trtype": "$TEST_TRANSPORT", 00:37:34.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.223 "adrfam": "ipv4", 00:37:34.223 "trsvcid": "$NVMF_PORT", 00:37:34.223 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.223 "hdgst": ${hdgst:-false}, 00:37:34.223 "ddgst": ${ddgst:-false} 00:37:34.223 }, 00:37:34.223 "method": "bdev_nvme_attach_controller" 00:37:34.223 } 00:37:34.223 EOF 00:37:34.223 )") 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:34.223 10:48:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.223 { 00:37:34.223 "params": { 00:37:34.223 "name": "Nvme$subsystem", 00:37:34.223 "trtype": "$TEST_TRANSPORT", 00:37:34.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.223 "adrfam": "ipv4", 00:37:34.223 "trsvcid": "$NVMF_PORT", 00:37:34.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.223 "hdgst": ${hdgst:-false}, 00:37:34.223 "ddgst": ${ddgst:-false} 00:37:34.223 }, 00:37:34.223 "method": "bdev_nvme_attach_controller" 00:37:34.223 } 00:37:34.223 EOF 00:37:34.223 )") 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:34.223 "params": { 00:37:34.223 "name": "Nvme0", 00:37:34.223 "trtype": "tcp", 00:37:34.223 "traddr": "10.0.0.2", 00:37:34.223 "adrfam": "ipv4", 00:37:34.223 "trsvcid": "4420", 00:37:34.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.223 "hdgst": false, 00:37:34.223 "ddgst": false 00:37:34.223 }, 00:37:34.223 "method": "bdev_nvme_attach_controller" 00:37:34.223 },{ 00:37:34.223 "params": { 00:37:34.223 "name": "Nvme1", 00:37:34.223 "trtype": "tcp", 00:37:34.223 "traddr": "10.0.0.2", 00:37:34.223 "adrfam": "ipv4", 00:37:34.223 "trsvcid": "4420", 00:37:34.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:34.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:34.223 "hdgst": false, 00:37:34.223 "ddgst": false 00:37:34.223 }, 00:37:34.223 "method": "bdev_nvme_attach_controller" 00:37:34.223 }' 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:34.223 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.224 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:34.224 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:34.224 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:34.224 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:34.224 10:48:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:34.224 10:48:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.224 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:34.224 ... 00:37:34.224 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:34.224 ... 00:37:34.224 fio-3.35 00:37:34.224 Starting 4 threads 00:37:39.494 00:37:39.494 filename0: (groupid=0, jobs=1): err= 0: pid=2923965: Mon Dec 9 10:48:16 2024 00:37:39.494 read: IOPS=2824, BW=22.1MiB/s (23.1MB/s)(110MiB/5002msec) 00:37:39.494 slat (nsec): min=6055, max=43662, avg=8575.83, stdev=2917.95 00:37:39.494 clat (usec): min=713, max=5280, avg=2807.46, stdev=419.13 00:37:39.494 lat (usec): min=724, max=5292, avg=2816.04, stdev=419.01 00:37:39.494 clat percentiles (usec): 00:37:39.494 | 1.00th=[ 1713], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2507], 00:37:39.494 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2966], 00:37:39.494 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3425], 00:37:39.494 | 99.00th=[ 4047], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5145], 00:37:39.494 | 99.99th=[ 5276] 00:37:39.494 bw ( KiB/s): min=21488, max=23296, per=26.49%, avg=22533.56, stdev=555.75, samples=9 00:37:39.494 iops : min= 2686, max= 2912, avg=2816.67, stdev=69.45, samples=9 00:37:39.494 lat (usec) : 750=0.01%, 1000=0.01% 00:37:39.494 lat (msec) : 2=2.96%, 4=95.79%, 10=1.23% 00:37:39.494 cpu : usr=95.76%, sys=3.94%, ctx=8, majf=0, minf=9 00:37:39.494 IO depths : 1=0.2%, 2=5.2%, 4=65.4%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.494 complete 
: 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.494 issued rwts: total=14127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.494 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:39.494 filename0: (groupid=0, jobs=1): err= 0: pid=2923966: Mon Dec 9 10:48:16 2024 00:37:39.494 read: IOPS=2618, BW=20.5MiB/s (21.4MB/s)(102MiB/5002msec) 00:37:39.494 slat (nsec): min=6025, max=42732, avg=8565.87, stdev=2983.76 00:37:39.494 clat (usec): min=791, max=5709, avg=3030.07, stdev=426.43 00:37:39.494 lat (usec): min=801, max=5720, avg=3038.63, stdev=426.30 00:37:39.494 clat percentiles (usec): 00:37:39.494 | 1.00th=[ 2089], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2769], 00:37:39.494 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:37:39.494 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3752], 00:37:39.494 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5407], 00:37:39.494 | 99.99th=[ 5669] 00:37:39.494 bw ( KiB/s): min=20240, max=21552, per=24.65%, avg=20968.89, stdev=460.82, samples=9 00:37:39.494 iops : min= 2530, max= 2694, avg=2621.11, stdev=57.60, samples=9 00:37:39.494 lat (usec) : 1000=0.02% 00:37:39.494 lat (msec) : 2=0.72%, 4=95.98%, 10=3.28% 00:37:39.494 cpu : usr=96.16%, sys=3.52%, ctx=8, majf=0, minf=9 00:37:39.494 IO depths : 1=0.1%, 2=3.0%, 4=69.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.494 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.494 issued rwts: total=13097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.494 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:39.494 filename1: (groupid=0, jobs=1): err= 0: pid=2923967: Mon Dec 9 10:48:16 2024 00:37:39.494 read: IOPS=2671, BW=20.9MiB/s (21.9MB/s)(104MiB/5002msec) 00:37:39.494 slat (nsec): min=6052, max=40807, avg=8967.55, stdev=3057.05 00:37:39.494 clat (usec): min=996, max=5454, 
avg=2969.46, stdev=434.48 00:37:39.494 lat (usec): min=1002, max=5460, avg=2978.43, stdev=434.34 00:37:39.494 clat percentiles (usec): 00:37:39.494 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:37:39.494 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:37:39.494 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3720], 00:37:39.494 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5145], 99.95th=[ 5211], 00:37:39.494 | 99.99th=[ 5473] 00:37:39.494 bw ( KiB/s): min=20688, max=22176, per=25.11%, avg=21363.56, stdev=605.19, samples=9 00:37:39.494 iops : min= 2586, max= 2772, avg=2670.44, stdev=75.65, samples=9 00:37:39.494 lat (usec) : 1000=0.01% 00:37:39.494 lat (msec) : 2=0.93%, 4=96.24%, 10=2.83% 00:37:39.494 cpu : usr=95.78%, sys=3.90%, ctx=8, majf=0, minf=9 00:37:39.494 IO depths : 1=0.1%, 2=3.0%, 4=67.0%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.494 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.494 issued rwts: total=13362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.494 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:39.494 filename1: (groupid=0, jobs=1): err= 0: pid=2923968: Mon Dec 9 10:48:16 2024 00:37:39.494 read: IOPS=2521, BW=19.7MiB/s (20.7MB/s)(98.5MiB/5001msec) 00:37:39.494 slat (nsec): min=6043, max=45455, avg=8603.08, stdev=2962.50 00:37:39.494 clat (usec): min=678, max=5806, avg=3147.59, stdev=496.53 00:37:39.494 lat (usec): min=688, max=5818, avg=3156.19, stdev=496.22 00:37:39.494 clat percentiles (usec): 00:37:39.494 | 1.00th=[ 2114], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2900], 00:37:39.494 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3097], 00:37:39.494 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 4228], 00:37:39.494 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5538], 00:37:39.494 | 99.99th=[ 
5800] 00:37:39.494 bw ( KiB/s): min=19094, max=20976, per=23.72%, avg=20183.78, stdev=557.91, samples=9 00:37:39.494 iops : min= 2386, max= 2622, avg=2522.89, stdev=69.92, samples=9 00:37:39.494 lat (usec) : 750=0.01%, 1000=0.01% 00:37:39.494 lat (msec) : 2=0.64%, 4=92.66%, 10=6.68% 00:37:39.494 cpu : usr=95.56%, sys=4.14%, ctx=10, majf=0, minf=9 00:37:39.495 IO depths : 1=0.1%, 2=3.5%, 4=69.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.495 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.495 issued rwts: total=12608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.495 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:39.495 00:37:39.495 Run status group 0 (all jobs): 00:37:39.495 READ: bw=83.1MiB/s (87.1MB/s), 19.7MiB/s-22.1MiB/s (20.7MB/s-23.1MB/s), io=416MiB (436MB), run=5001-5002msec 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 00:37:39.495 real 0m24.485s 00:37:39.495 user 4m51.716s 00:37:39.495 sys 0m5.074s 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 ************************************ 00:37:39.495 END TEST fio_dif_rand_params 00:37:39.495 ************************************ 00:37:39.495 10:48:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:39.495 10:48:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:37:39.495 10:48:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 ************************************ 00:37:39.495 START TEST fio_dif_digest 00:37:39.495 ************************************ 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:37:39.495 bdev_null0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:39.495 [2024-12-09 10:48:17.138655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:39.495 
10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.495 { 00:37:39.495 "params": { 00:37:39.495 "name": "Nvme$subsystem", 00:37:39.495 "trtype": "$TEST_TRANSPORT", 00:37:39.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.495 "adrfam": "ipv4", 00:37:39.495 "trsvcid": "$NVMF_PORT", 00:37:39.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.495 "hdgst": ${hdgst:-false}, 00:37:39.495 "ddgst": ${ddgst:-false} 00:37:39.495 }, 00:37:39.495 "method": "bdev_nvme_attach_controller" 00:37:39.495 } 00:37:39.495 EOF 00:37:39.495 )") 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1345 -- # shift 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.495 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:39.496 "params": { 00:37:39.496 "name": "Nvme0", 00:37:39.496 "trtype": "tcp", 00:37:39.496 "traddr": "10.0.0.2", 00:37:39.496 "adrfam": "ipv4", 00:37:39.496 "trsvcid": "4420", 00:37:39.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.496 "hdgst": true, 00:37:39.496 "ddgst": true 00:37:39.496 }, 00:37:39.496 "method": "bdev_nvme_attach_controller" 00:37:39.496 }' 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:39.496 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.761 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:39.761 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.761 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:39.761 10:48:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:40.018 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:40.018 ... 00:37:40.018 fio-3.35 00:37:40.019 Starting 3 threads 00:37:52.226 00:37:52.226 filename0: (groupid=0, jobs=1): err= 0: pid=2925171: Mon Dec 9 10:48:28 2024 00:37:52.226 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(370MiB/10048msec) 00:37:52.226 slat (nsec): min=6350, max=72417, avg=17577.30, stdev=5619.82 00:37:52.226 clat (usec): min=7039, max=51273, avg=10150.17, stdev=1278.31 00:37:52.226 lat (usec): min=7052, max=51285, avg=10167.75, stdev=1277.72 00:37:52.226 clat percentiles (usec): 00:37:52.226 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:37:52.226 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:37:52.226 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:37:52.226 | 99.00th=[11994], 99.50th=[12256], 99.90th=[13435], 99.95th=[50070], 00:37:52.226 | 99.99th=[51119] 00:37:52.226 bw ( KiB/s): min=36096, max=38656, per=35.84%, avg=37849.60, stdev=725.39, samples=20 00:37:52.226 iops : min= 282, max= 302, avg=295.70, stdev= 5.67, samples=20 00:37:52.226 lat 
(msec) : 10=43.14%, 20=56.79%, 100=0.07% 00:37:52.226 cpu : usr=96.11%, sys=3.57%, ctx=18, majf=0, minf=61 00:37:52.226 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.226 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:52.226 filename0: (groupid=0, jobs=1): err= 0: pid=2925172: Mon Dec 9 10:48:28 2024 00:37:52.226 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(337MiB/10046msec) 00:37:52.226 slat (nsec): min=6340, max=52147, avg=17946.86, stdev=8566.74 00:37:52.226 clat (usec): min=6634, max=47635, avg=11139.46, stdev=1254.85 00:37:52.226 lat (usec): min=6650, max=47644, avg=11157.41, stdev=1254.67 00:37:52.226 clat percentiles (usec): 00:37:52.226 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:37:52.226 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:37:52.226 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:37:52.226 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13960], 99.95th=[46400], 00:37:52.226 | 99.99th=[47449] 00:37:52.226 bw ( KiB/s): min=33792, max=35328, per=32.65%, avg=34483.20, stdev=416.12, samples=20 00:37:52.226 iops : min= 264, max= 276, avg=269.40, stdev= 3.25, samples=20 00:37:52.226 lat (msec) : 10=5.82%, 20=94.10%, 50=0.07% 00:37:52.226 cpu : usr=95.77%, sys=3.92%, ctx=16, majf=0, minf=49 00:37:52.226 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.226 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.226 latency : target=0, window=0, percentile=100.00%, depth=3 
00:37:52.226 filename0: (groupid=0, jobs=1): err= 0: pid=2925173: Mon Dec 9 10:48:28 2024 00:37:52.226 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(329MiB/10046msec) 00:37:52.226 slat (nsec): min=6289, max=44445, avg=17506.12, stdev=8401.50 00:37:52.226 clat (usec): min=8887, max=52873, avg=11410.74, stdev=1839.56 00:37:52.226 lat (usec): min=8898, max=52899, avg=11428.24, stdev=1839.84 00:37:52.226 clat percentiles (usec): 00:37:52.226 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:37:52.226 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:37:52.226 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:37:52.226 | 99.00th=[13435], 99.50th=[13829], 99.90th=[51119], 99.95th=[51119], 00:37:52.226 | 99.99th=[52691] 00:37:52.226 bw ( KiB/s): min=32256, max=34816, per=31.88%, avg=33664.00, stdev=528.57, samples=20 00:37:52.226 iops : min= 252, max= 272, avg=263.00, stdev= 4.13, samples=20 00:37:52.226 lat (msec) : 10=3.68%, 20=96.13%, 50=0.08%, 100=0.11% 00:37:52.226 cpu : usr=96.07%, sys=3.62%, ctx=14, majf=0, minf=48 00:37:52.226 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.227 issued rwts: total=2633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:52.227 00:37:52.227 Run status group 0 (all jobs): 00:37:52.227 READ: bw=103MiB/s (108MB/s), 32.8MiB/s-36.8MiB/s (34.4MB/s-38.6MB/s), io=1036MiB (1087MB), run=10046-10048msec 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.227 00:37:52.227 real 0m11.336s 00:37:52.227 user 0m35.556s 00:37:52.227 sys 0m1.444s 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.227 10:48:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:52.227 ************************************ 00:37:52.227 END TEST fio_dif_digest 00:37:52.227 ************************************ 00:37:52.227 10:48:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:52.227 10:48:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:52.227 rmmod nvme_tcp 00:37:52.227 rmmod 
nvme_fabrics 00:37:52.227 rmmod nvme_keyring 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2916130 ']' 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2916130 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2916130 ']' 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2916130 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2916130 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2916130' 00:37:52.227 killing process with pid 2916130 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2916130 00:37:52.227 10:48:28 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2916130 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:52.227 10:48:28 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:54.137 Waiting for block devices as requested 00:37:54.137 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:54.137 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:54.137 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:54.137 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:54.137 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:54.397 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:54.397 
0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:54.397 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:54.656 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:54.656 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:54.656 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:54.656 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:54.916 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:54.916 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:54.916 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:55.176 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:55.176 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:55.176 10:48:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.176 10:48:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:55.176 10:48:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.716 10:48:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:57.716 00:37:57.716 real 1m14.536s 00:37:57.716 user 7m9.768s 00:37:57.716 sys 0m20.138s 00:37:57.716 10:48:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:57.716 10:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:57.716 ************************************ 00:37:57.716 END TEST nvmf_dif 00:37:57.716 
************************************ 00:37:57.716 10:48:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:57.716 10:48:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:57.716 10:48:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:57.716 10:48:34 -- common/autotest_common.sh@10 -- # set +x 00:37:57.716 ************************************ 00:37:57.716 START TEST nvmf_abort_qd_sizes 00:37:57.717 ************************************ 00:37:57.717 10:48:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:57.717 * Looking for test storage... 00:37:57.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- 
scripts/common.sh@340 -- # ver1_l=2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:57.717 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:37:57.717 --rc genhtml_branch_coverage=1 00:37:57.717 --rc genhtml_function_coverage=1 00:37:57.717 --rc genhtml_legend=1 00:37:57.717 --rc geninfo_all_blocks=1 00:37:57.717 --rc geninfo_unexecuted_blocks=1 00:37:57.717 00:37:57.717 ' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:57.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.717 --rc genhtml_branch_coverage=1 00:37:57.717 --rc genhtml_function_coverage=1 00:37:57.717 --rc genhtml_legend=1 00:37:57.717 --rc geninfo_all_blocks=1 00:37:57.717 --rc geninfo_unexecuted_blocks=1 00:37:57.717 00:37:57.717 ' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:57.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.717 --rc genhtml_branch_coverage=1 00:37:57.717 --rc genhtml_function_coverage=1 00:37:57.717 --rc genhtml_legend=1 00:37:57.717 --rc geninfo_all_blocks=1 00:37:57.717 --rc geninfo_unexecuted_blocks=1 00:37:57.717 00:37:57.717 ' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:57.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.717 --rc genhtml_branch_coverage=1 00:37:57.717 --rc genhtml_function_coverage=1 00:37:57.717 --rc genhtml_legend=1 00:37:57.717 --rc geninfo_all_blocks=1 00:37:57.717 --rc geninfo_unexecuted_blocks=1 00:37:57.717 00:37:57.717 ' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:57.717 10:48:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:57.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:57.717 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:37:57.718 10:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:04.293 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:04.293 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:04.293 Found net devices under 0000:86:00.0: cvl_0_0 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:04.293 Found net devices under 0000:86:00.1: cvl_0_1 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:04.293 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:04.294 10:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:04.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
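The `ipts` trace above (nvmf/common.sh@287 expanding into the iptables call at @790) suggests a thin wrapper that tags each firewall rule with an `SPDK_NVMF:` comment recording the original rule spec, so teardown can later find and delete exactly the rules the test added. A minimal sketch of such a wrapper; the echo stub standing in for the real `iptables` binary is purely illustrative, so the expansion can be shown without root:

```shell
#!/usr/bin/env bash
# Stub standing in for the real iptables binary, so the argument expansion
# can be demonstrated without privileges; the actual script runs iptables.
iptables() { echo "iptables $*"; }

# Hypothetical reconstruction of the ipts helper: forward all arguments and
# append a comment match carrying the original rule spec verbatim.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules this way makes cleanup a matter of listing rules and dropping those whose comment begins with `SPDK_NVMF:`.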
00:38:04.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:38:04.294 00:38:04.294 --- 10.0.0.2 ping statistics --- 00:38:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.294 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:04.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:04.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:38:04.294 00:38:04.294 --- 10.0.0.1 ping statistics --- 00:38:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.294 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:04.294 10:48:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:06.202 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:06.202 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:06.462 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:06.462 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:06.462 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:06.462 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:06.462 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:38:06.462 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:07.843 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2933045 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2933045 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2933045 ']' 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:07.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.843 10:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:07.843 [2024-12-09 10:48:45.503053] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:38:07.843 [2024-12-09 10:48:45.503099] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.101 [2024-12-09 10:48:45.584174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:08.101 [2024-12-09 10:48:45.626083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.101 [2024-12-09 10:48:45.626144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.101 [2024-12-09 10:48:45.626152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.101 [2024-12-09 10:48:45.626159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:08.101 [2024-12-09 10:48:45.626165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:08.101 [2024-12-09 10:48:45.627689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.101 [2024-12-09 10:48:45.627797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:08.101 [2024-12-09 10:48:45.627927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.101 [2024-12-09 10:48:45.627927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
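The `nvme_in_userspace` trace above (scripts/common.sh@312-329) enumerates PCI functions whose class code marks them as NVM Express controllers (0x010802) and keeps those with a driver node present. A rough sketch of that enumeration; the function name and the sysfs-root parameter are hypothetical, added only so the logic can be exercised without NVMe hardware:

```shell
#!/usr/bin/env bash
# list_nvme_bdfs: print the bus:device.function of every PCI device whose
# sysfs class attribute identifies it as an NVMe controller (0x010802).
# $1 is the sysfs PCI devices directory (normally /sys/bus/pci/devices).
list_nvme_bdfs() {
  local root=$1 dev class
  for dev in "$root"/*; do
    [ -r "$dev/class" ] || continue
    read -r class < "$dev/class"
    [ "$class" = "0x010802" ] && printf '%s\n' "${dev##*/}"
  done
}

list_nvme_bdfs /sys/bus/pci/devices
```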
00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.668 10:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:08.927 ************************************ 00:38:08.927 START TEST spdk_target_abort 00:38:08.927 ************************************ 00:38:08.927 10:48:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:08.927 10:48:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:08.927 10:48:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:38:08.927 10:48:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.927 10:48:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.212 spdk_targetn1 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.212 [2024-12-09 10:48:49.259081] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.212 [2024-12-09 10:48:49.311413] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:12.212 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:12.213 10:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:15.494 Initializing NVMe Controllers 00:38:15.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:15.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:15.494 Initialization complete. Launching workers. 
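The `rabort` trace above (abort_qd_sizes.sh@28-29) builds the transport descriptor passed to the abort example's `-r` flag one field at a time. A minimal reconstruction of that loop using bash indirect expansion; the exact variable handling in the real script may differ:

```shell
#!/usr/bin/env bash
# Build the "-r" transport string field by field, as the traced loop does:
# each pass appends "name:value" for one of the connection parameters.
trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

target=
for r in trtype adrfam traddr trsvcid subnqn; do
  target+="${target:+ }$r:${!r}"   # ${!r} expands the variable named by $r
done
echo "$target"
```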
00:38:15.494 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15676, failed: 0 00:38:15.494 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1183, failed to submit 14493 00:38:15.494 success 717, unsuccessful 466, failed 0 00:38:15.494 10:48:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:15.494 10:48:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:18.781 Initializing NVMe Controllers 00:38:18.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:18.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:18.781 Initialization complete. Launching workers. 00:38:18.781 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8517, failed: 0 00:38:18.781 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7296 00:38:18.781 success 312, unsuccessful 909, failed 0 00:38:18.781 10:48:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:18.781 10:48:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:22.109 Initializing NVMe Controllers 00:38:22.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:22.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:22.109 Initialization complete. Launching workers. 
00:38:22.109 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38532, failed: 0 00:38:22.109 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2768, failed to submit 35764 00:38:22.109 success 579, unsuccessful 2189, failed 0 00:38:22.109 10:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:22.109 10:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.109 10:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:22.109 10:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.109 10:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:22.109 10:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.109 10:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2933045 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2933045 ']' 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2933045 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2933045 00:38:23.490 10:49:01 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2933045' 00:38:23.490 killing process with pid 2933045 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2933045 00:38:23.490 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2933045 00:38:23.752 00:38:23.752 real 0m14.877s 00:38:23.752 user 0m59.208s 00:38:23.752 sys 0m2.645s 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.752 ************************************ 00:38:23.752 END TEST spdk_target_abort 00:38:23.752 ************************************ 00:38:23.752 10:49:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:23.752 10:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:23.752 10:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.752 10:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.752 ************************************ 00:38:23.752 START TEST kernel_target_abort 00:38:23.752 ************************************ 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:23.752 10:49:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:23.752 10:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:27.048 Waiting for block devices as requested 00:38:27.048 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:27.048 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:27.048 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:27.048 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:27.048 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:27.048 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:27.048 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:27.048 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:27.308 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:27.308 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:27.308 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:27.566 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:27.566 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:27.566 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:27.566 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:27.826 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:27.826 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:27.826 10:49:05 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:27.826 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:28.085 No valid GPT data, bailing 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:28.085 00:38:28.085 Discovery Log Number of Records 2, Generation counter 2 00:38:28.085 =====Discovery Log Entry 0====== 00:38:28.085 trtype: tcp 00:38:28.085 adrfam: ipv4 00:38:28.085 subtype: current discovery subsystem 00:38:28.085 treq: not specified, sq flow control disable supported 00:38:28.085 portid: 1 00:38:28.085 trsvcid: 4420 00:38:28.085 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:28.085 traddr: 10.0.0.1 00:38:28.085 eflags: none 00:38:28.085 sectype: none 00:38:28.085 =====Discovery Log Entry 1====== 00:38:28.085 trtype: tcp 00:38:28.085 adrfam: ipv4 00:38:28.085 subtype: nvme subsystem 00:38:28.085 treq: not specified, sq flow control disable supported 00:38:28.085 portid: 1 00:38:28.085 trsvcid: 4420 00:38:28.085 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:28.085 traddr: 10.0.0.1 00:38:28.085 eflags: none 00:38:28.085 sectype: none 00:38:28.085 10:49:05 
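The interleaved `mkdir`/`echo`/`ln -s` trace above is the standard kernel nvmet configfs bring-up that `configure_kernel_target` performs. A condensed dry-run sketch that only prints the equivalent command sequence (the attribute file names are taken from the kernel nvmet configfs ABI, not shown verbatim in the log; actually executing them requires root and the nvmet/nvmet_tcp modules):

```shell
#!/usr/bin/env bash
# Print (not execute) the configfs steps that expose a local block device
# as a kernel NVMe-oF/TCP target, mirroring the trace above.
configure_kernel_target() {
    local nqn=$1 ip=$2 dev=$3
    local subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    local port=/sys/kernel/config/nvmet/ports/1
    cat <<EOF
mkdir $subsys
mkdir $subsys/namespaces/1
mkdir $port
echo 1 > $subsys/attr_allow_any_host
echo $dev > $subsys/namespaces/1/device_path
echo 1 > $subsys/namespaces/1/enable
echo $ip > $port/addr_traddr
echo tcp > $port/addr_trtype
echo 4420 > $port/addr_trsvcid
echo ipv4 > $port/addr_adrfam
ln -s $subsys $port/subsystems/
EOF
}

configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 /dev/nvme0n1
```

Once the port-to-subsystem symlink exists, the subsystem shows up in `nvme discover`, which is exactly the two-entry discovery log printed next in the trace.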
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.085 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:28.086 10:49:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:31.368 Initializing NVMe Controllers 00:38:31.368 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:31.368 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:31.368 Initialization complete. Launching workers. 
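The `rabort` xtrace above builds the abort tool's `-r` connection string one field at a time, appending `name:value` for each of `trtype adrfam traddr trsvcid subnqn`. A standalone reconstruction of that loop (equivalent to the traced script, rebuilt from the xtrace output rather than copied from `abort_qd_sizes.sh`):

```shell
#!/usr/bin/env bash
# Rebuild the '-r' transport string the way the traced loop does:
# append "field:value" for each transport field, space-separated.
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

target=
for r in trtype adrfam traddr trsvcid subnqn; do
    # ${!r} is bash indirect expansion: the value of the variable named $r
    target="${target:+$target }$r:${!r}"
done

echo "$target"
# trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn
```

That final string is what each `abort -q <qd> ... -r '...'` invocation in the log receives, with only the queue depth (`4`, `24`, `64`) varying between runs.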
00:38:31.368 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94582, failed: 0 00:38:31.368 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94582, failed to submit 0 00:38:31.368 success 0, unsuccessful 94582, failed 0 00:38:31.368 10:49:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:31.368 10:49:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:34.648 Initializing NVMe Controllers 00:38:34.648 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:34.648 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:34.648 Initialization complete. Launching workers. 00:38:34.648 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 147338, failed: 0 00:38:34.648 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37150, failed to submit 110188 00:38:34.648 success 0, unsuccessful 37150, failed 0 00:38:34.648 10:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:34.648 10:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:37.932 Initializing NVMe Controllers 00:38:37.932 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:37.932 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:37.932 Initialization complete. Launching workers. 
00:38:37.932 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 140832, failed: 0 00:38:37.932 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35258, failed to submit 105574 00:38:37.932 success 0, unsuccessful 35258, failed 0 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:37.932 10:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:40.475 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 
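The `clean_kernel_target` trace above tears the target down in reverse order of setup: disable the namespace, unlink the port from the subsystem, then remove configfs directories leaf-first before unloading the modules. A dry-run sketch printing that sequence (paths mirror the trace; executing it requires root):

```shell
#!/usr/bin/env bash
# Print the teardown sequence for the kernel nvmet target, in the same
# order as the clean_kernel_target trace: namespace off, port unlink,
# leaf-first rmdir, then module unload.
clean_kernel_target() {
    local nqn=$1
    local subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    cat <<EOF
echo 0 > $subsys/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
rmdir $subsys/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir $subsys
modprobe -r nvmet_tcp nvmet
EOF
}

clean_kernel_target nqn.2016-06.io.spdk:testnqn
```

The ordering matters: configfs refuses to `rmdir` a subsystem that a port still links to, which is why the `rm -f` of the port symlink precedes the `rmdir` calls.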
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:40.475 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:41.859 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:41.859 00:38:41.859 real 0m18.056s 00:38:41.859 user 0m9.203s 00:38:41.859 sys 0m5.015s 00:38:41.859 10:49:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.859 10:49:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:41.859 ************************************ 00:38:41.859 END TEST kernel_target_abort 00:38:41.859 ************************************ 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.859 rmmod nvme_tcp 00:38:41.859 rmmod nvme_fabrics 00:38:41.859 rmmod nvme_keyring 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2933045 ']' 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2933045 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2933045 ']' 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2933045 00:38:41.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2933045) - No such process 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2933045 is not found' 00:38:41.859 Process with pid 2933045 is not found 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:41.859 10:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:45.157 Waiting for block devices as requested 00:38:45.157 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:45.157 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:45.157 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:45.157 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:45.157 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:45.157 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:45.158 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:45.158 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:45.418 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:45.418 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:45.418 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:45.678 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:45.678 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:45.678 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:45.938 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:45.938 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:45.938 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:45.938 10:49:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.477 10:49:25 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:48.477 00:38:48.477 real 0m50.735s 00:38:48.477 user 1m12.962s 00:38:48.477 sys 0m16.396s 00:38:48.477 10:49:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.477 10:49:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:48.477 ************************************ 00:38:48.477 END TEST nvmf_abort_qd_sizes 00:38:48.477 ************************************ 00:38:48.477 10:49:25 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:48.477 10:49:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:48.477 10:49:25 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:38:48.477 10:49:25 -- common/autotest_common.sh@10 -- # set +x 00:38:48.477 ************************************ 00:38:48.477 START TEST keyring_file 00:38:48.477 ************************************ 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:48.477 * Looking for test storage... 00:38:48.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:48.477 10:49:25 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:48.477 10:49:25 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.477 --rc genhtml_branch_coverage=1 00:38:48.477 --rc genhtml_function_coverage=1 00:38:48.477 --rc genhtml_legend=1 00:38:48.477 --rc geninfo_all_blocks=1 00:38:48.477 --rc geninfo_unexecuted_blocks=1 00:38:48.477 00:38:48.477 ' 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.477 --rc genhtml_branch_coverage=1 00:38:48.477 --rc genhtml_function_coverage=1 00:38:48.477 --rc genhtml_legend=1 00:38:48.477 --rc geninfo_all_blocks=1 00:38:48.477 --rc 
geninfo_unexecuted_blocks=1 00:38:48.477 00:38:48.477 ' 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.477 --rc genhtml_branch_coverage=1 00:38:48.477 --rc genhtml_function_coverage=1 00:38:48.477 --rc genhtml_legend=1 00:38:48.477 --rc geninfo_all_blocks=1 00:38:48.477 --rc geninfo_unexecuted_blocks=1 00:38:48.477 00:38:48.477 ' 00:38:48.477 10:49:25 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.477 --rc genhtml_branch_coverage=1 00:38:48.477 --rc genhtml_function_coverage=1 00:38:48.477 --rc genhtml_legend=1 00:38:48.477 --rc geninfo_all_blocks=1 00:38:48.477 --rc geninfo_unexecuted_blocks=1 00:38:48.478 00:38:48.478 ' 00:38:48.478 10:49:25 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:48.478 10:49:25 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.478 10:49:25 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.478 10:49:25 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:48.478 10:49:25 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.478 10:49:25 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.478 10:49:25 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.478 10:49:25 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.478 10:49:25 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.478 10:49:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.478 10:49:25 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:48.478 10:49:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:48.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:48.478 10:49:25 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:48.478 10:49:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:48.478 10:49:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:48.478 10:49:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:48.478 10:49:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:48.478 10:49:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:48.478 10:49:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:48.478 10:49:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:48.478 10:49:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:48.478 10:49:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:48.478 10:49:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:48.478 10:49:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:48.478 10:49:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vxZdbzdRzf 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vxZdbzdRzf 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vxZdbzdRzf 00:38:48.478 10:49:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vxZdbzdRzf 00:38:48.478 10:49:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jMwBudL09u 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:48.478 10:49:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jMwBudL09u 00:38:48.478 10:49:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jMwBudL09u 00:38:48.478 10:49:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jMwBudL09u 
00:38:48.478 10:49:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=2942055 00:38:48.478 10:49:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:48.478 10:49:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2942055 00:38:48.478 10:49:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2942055 ']' 00:38:48.478 10:49:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.478 10:49:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.478 10:49:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.478 10:49:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.478 10:49:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:48.478 [2024-12-09 10:49:26.144274] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:38:48.478 [2024-12-09 10:49:26.144324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942055 ] 00:38:48.737 [2024-12-09 10:49:26.220469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.737 [2024-12-09 10:49:26.263649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:48.996 10:49:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:48.996 [2024-12-09 10:49:26.474457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.996 null0 00:38:48.996 [2024-12-09 10:49:26.506494] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:48.996 [2024-12-09 10:49:26.506832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.996 10:49:26 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:48.996 [2024-12-09 10:49:26.534556] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:48.996 request: 00:38:48.996 { 00:38:48.996 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:48.996 "secure_channel": false, 00:38:48.996 "listen_address": { 00:38:48.996 "trtype": "tcp", 00:38:48.996 "traddr": "127.0.0.1", 00:38:48.996 "trsvcid": "4420" 00:38:48.996 }, 00:38:48.996 "method": "nvmf_subsystem_add_listener", 00:38:48.996 "req_id": 1 00:38:48.996 } 00:38:48.996 Got JSON-RPC error response 00:38:48.996 response: 00:38:48.996 { 00:38:48.996 "code": -32602, 00:38:48.996 "message": "Invalid parameters" 00:38:48.996 } 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:48.996 10:49:26 keyring_file -- keyring/file.sh@47 -- # bperfpid=2942065 00:38:48.996 10:49:26 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2942065 /var/tmp/bperf.sock 00:38:48.996 10:49:26 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:48.996 10:49:26 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2942065 ']' 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:48.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.996 10:49:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:48.996 [2024-12-09 10:49:26.589449] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:38:48.996 [2024-12-09 10:49:26.589492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942065 ] 00:38:48.996 [2024-12-09 10:49:26.664286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.996 [2024-12-09 10:49:26.706314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.255 10:49:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.255 10:49:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:49.255 10:49:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:49.255 10:49:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:49.513 10:49:27 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jMwBudL09u 00:38:49.513 10:49:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jMwBudL09u 00:38:49.513 10:49:27 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:49.513 10:49:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:49.513 10:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:49.513 10:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:49.513 10:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.770 10:49:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vxZdbzdRzf == \/\t\m\p\/\t\m\p\.\v\x\Z\d\b\z\d\R\z\f ]] 00:38:49.770 10:49:27 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:49.770 10:49:27 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:49.770 10:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:49.770 10:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.770 10:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:50.027 10:49:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.jMwBudL09u == \/\t\m\p\/\t\m\p\.\j\M\w\B\u\d\L\0\9\u ]] 00:38:50.027 10:49:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:50.027 10:49:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.027 10:49:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:50.027 10:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.027 10:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.028 10:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:50.285 10:49:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:50.285 10:49:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:50.285 10:49:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:50.285 10:49:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.285 10:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.285 10:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:50.285 10:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.285 10:49:27 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:50.285 10:49:27 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:50.285 10:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:50.542 [2024-12-09 10:49:28.166719] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:50.542 nvme0n1 00:38:50.542 10:49:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:50.542 10:49:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:50.542 10:49:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.542 10:49:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.542 10:49:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:50.542 10:49:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:50.799 10:49:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:50.799 10:49:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:50.799 10:49:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:50.799 10:49:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.799 10:49:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.799 10:49:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:50.799 10:49:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.056 10:49:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:51.056 10:49:28 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:51.056 Running I/O for 1 seconds... 00:38:52.424 19263.00 IOPS, 75.25 MiB/s 00:38:52.424 Latency(us) 00:38:52.424 [2024-12-09T09:49:30.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:52.424 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:52.425 nvme0n1 : 1.00 19308.74 75.42 0.00 0.00 6616.99 2761.87 18724.57 00:38:52.425 [2024-12-09T09:49:30.149Z] =================================================================================================================== 00:38:52.425 [2024-12-09T09:49:30.149Z] Total : 19308.74 75.42 0.00 0.00 6616.99 2761.87 18724.57 00:38:52.425 { 00:38:52.425 "results": [ 00:38:52.425 { 00:38:52.425 "job": "nvme0n1", 00:38:52.425 "core_mask": "0x2", 00:38:52.425 "workload": "randrw", 00:38:52.425 "percentage": 50, 00:38:52.425 "status": "finished", 00:38:52.425 "queue_depth": 128, 00:38:52.425 "io_size": 4096, 00:38:52.425 "runtime": 1.004364, 00:38:52.425 "iops": 19308.73667315834, 00:38:52.425 "mibps": 75.42475262952476, 
00:38:52.425 "io_failed": 0, 00:38:52.425 "io_timeout": 0, 00:38:52.425 "avg_latency_us": 6616.987207460718, 00:38:52.425 "min_latency_us": 2761.8742857142856, 00:38:52.425 "max_latency_us": 18724.571428571428 00:38:52.425 } 00:38:52.425 ], 00:38:52.425 "core_count": 1 00:38:52.425 } 00:38:52.425 10:49:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:52.425 10:49:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:52.425 10:49:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:52.425 10:49:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:52.425 10:49:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.425 10:49:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.425 10:49:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:52.425 10:49:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.682 10:49:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:52.682 10:49:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:52.682 10:49:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:52.682 10:49:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.682 10:49:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.682 10:49:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:52.682 10:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.682 10:49:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:52.682 10:49:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:52.682 10:49:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:52.682 10:49:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:52.682 10:49:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:52.682 10:49:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:52.682 10:49:30 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:52.682 10:49:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:52.682 10:49:30 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:52.682 10:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:52.944 [2024-12-09 10:49:30.538960] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:52.944 [2024-12-09 10:49:30.539195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceee30 (107): Transport endpoint is not connected 00:38:52.944 [2024-12-09 10:49:30.540190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceee30 (9): Bad file descriptor 00:38:52.944 [2024-12-09 10:49:30.541191] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:52.944 [2024-12-09 10:49:30.541202] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:52.944 [2024-12-09 10:49:30.541208] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:52.944 [2024-12-09 10:49:30.541216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:52.944 request: 00:38:52.944 { 00:38:52.944 "name": "nvme0", 00:38:52.944 "trtype": "tcp", 00:38:52.944 "traddr": "127.0.0.1", 00:38:52.944 "adrfam": "ipv4", 00:38:52.944 "trsvcid": "4420", 00:38:52.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:52.944 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:52.944 "prchk_reftag": false, 00:38:52.944 "prchk_guard": false, 00:38:52.944 "hdgst": false, 00:38:52.944 "ddgst": false, 00:38:52.944 "psk": "key1", 00:38:52.944 "allow_unrecognized_csi": false, 00:38:52.944 "method": "bdev_nvme_attach_controller", 00:38:52.944 "req_id": 1 00:38:52.944 } 00:38:52.944 Got JSON-RPC error response 00:38:52.944 response: 00:38:52.944 { 00:38:52.944 "code": -5, 00:38:52.944 "message": "Input/output error" 00:38:52.944 } 00:38:52.944 10:49:30 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:52.944 10:49:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:52.944 10:49:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:52.944 10:49:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:52.944 10:49:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:52.944 10:49:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:52.944 10:49:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.944 10:49:30 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:38:52.944 10:49:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:52.944 10:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.201 10:49:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:53.201 10:49:30 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:53.201 10:49:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:53.201 10:49:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:53.201 10:49:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:53.201 10:49:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:53.201 10:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.459 10:49:30 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:53.459 10:49:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:53.459 10:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:53.459 10:49:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:53.459 10:49:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:53.716 10:49:31 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:53.716 10:49:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.716 10:49:31 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:53.974 10:49:31 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:38:53.974 10:49:31 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vxZdbzdRzf 00:38:53.974 10:49:31 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:53.974 10:49:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:53.974 10:49:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:53.974 10:49:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:53.974 10:49:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:53.974 10:49:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:53.974 10:49:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:53.974 10:49:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:53.974 10:49:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:53.974 [2024-12-09 10:49:31.681203] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vxZdbzdRzf': 0100660 00:38:53.974 [2024-12-09 10:49:31.681231] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:53.974 request: 00:38:53.974 { 00:38:53.974 "name": "key0", 00:38:53.974 "path": "/tmp/tmp.vxZdbzdRzf", 00:38:53.974 "method": "keyring_file_add_key", 00:38:53.974 "req_id": 1 00:38:53.974 } 00:38:53.974 Got JSON-RPC error response 00:38:53.974 response: 00:38:53.974 { 00:38:53.974 "code": -1, 00:38:53.974 "message": "Operation not permitted" 00:38:53.974 } 00:38:54.232 10:49:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:54.232 10:49:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:54.232 10:49:31 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:54.232 10:49:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:54.232 10:49:31 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vxZdbzdRzf 00:38:54.232 10:49:31 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:54.232 10:49:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vxZdbzdRzf 00:38:54.232 10:49:31 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vxZdbzdRzf 00:38:54.232 10:49:31 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:54.232 10:49:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:54.232 10:49:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.232 10:49:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.232 10:49:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:54.232 10:49:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.489 10:49:32 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:54.489 10:49:32 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.489 10:49:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:54.489 10:49:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.490 10:49:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:54.490 10:49:32 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:54.490 10:49:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:54.490 10:49:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:54.490 10:49:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.490 10:49:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.747 [2024-12-09 10:49:32.266747] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vxZdbzdRzf': No such file or directory 00:38:54.747 [2024-12-09 10:49:32.266772] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:54.747 [2024-12-09 10:49:32.266787] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:54.747 [2024-12-09 10:49:32.266793] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:54.747 [2024-12-09 10:49:32.266800] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:54.747 [2024-12-09 10:49:32.266806] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:54.747 request: 00:38:54.747 { 00:38:54.747 "name": "nvme0", 00:38:54.747 "trtype": "tcp", 00:38:54.747 "traddr": "127.0.0.1", 00:38:54.747 "adrfam": "ipv4", 00:38:54.747 "trsvcid": "4420", 00:38:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.747 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:54.747 "prchk_reftag": false, 00:38:54.747 "prchk_guard": false, 00:38:54.747 "hdgst": false, 00:38:54.747 "ddgst": false, 00:38:54.747 "psk": "key0", 00:38:54.747 "allow_unrecognized_csi": false, 00:38:54.747 "method": "bdev_nvme_attach_controller", 00:38:54.747 "req_id": 1 00:38:54.747 } 00:38:54.747 Got JSON-RPC error response 00:38:54.747 response: 00:38:54.747 { 00:38:54.747 "code": -19, 00:38:54.747 "message": "No such device" 00:38:54.747 } 00:38:54.747 10:49:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:54.747 10:49:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:54.747 10:49:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:54.747 10:49:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:54.747 10:49:32 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:54.747 10:49:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:55.005 10:49:32 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.o3W9rRTGrC 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:55.005 10:49:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:55.005 10:49:32 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:38:55.005 10:49:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:55.005 10:49:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:55.005 10:49:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:55.005 10:49:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.o3W9rRTGrC 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.o3W9rRTGrC 00:38:55.005 10:49:32 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.o3W9rRTGrC 00:38:55.005 10:49:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.o3W9rRTGrC 00:38:55.005 10:49:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.o3W9rRTGrC 00:38:55.261 10:49:32 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:55.262 10:49:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:55.519 nvme0n1 00:38:55.519 10:49:33 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:55.519 10:49:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:55.519 10:49:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:55.519 10:49:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:55.519 10:49:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.519 
10:49:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:55.519 10:49:33 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:55.519 10:49:33 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:55.519 10:49:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:55.776 10:49:33 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:55.776 10:49:33 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:55.777 10:49:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:55.777 10:49:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:55.777 10:49:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.034 10:49:33 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:56.034 10:49:33 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:56.034 10:49:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:56.034 10:49:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.034 10:49:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.034 10:49:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:56.034 10:49:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.291 10:49:33 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:56.291 10:49:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:56.291 10:49:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:38:56.291 10:49:33 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:56.291 10:49:33 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:56.291 10:49:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.548 10:49:34 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:56.548 10:49:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.o3W9rRTGrC 00:38:56.548 10:49:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.o3W9rRTGrC 00:38:56.806 10:49:34 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jMwBudL09u 00:38:56.806 10:49:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jMwBudL09u 00:38:57.063 10:49:34 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:57.063 10:49:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:57.320 nvme0n1 00:38:57.320 10:49:34 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:57.320 10:49:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:57.583 10:49:35 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:57.583 "subsystems": [ 00:38:57.583 { 00:38:57.583 "subsystem": "keyring", 00:38:57.583 
"config": [ 00:38:57.583 { 00:38:57.583 "method": "keyring_file_add_key", 00:38:57.583 "params": { 00:38:57.583 "name": "key0", 00:38:57.583 "path": "/tmp/tmp.o3W9rRTGrC" 00:38:57.583 } 00:38:57.583 }, 00:38:57.583 { 00:38:57.583 "method": "keyring_file_add_key", 00:38:57.583 "params": { 00:38:57.583 "name": "key1", 00:38:57.583 "path": "/tmp/tmp.jMwBudL09u" 00:38:57.584 } 00:38:57.584 } 00:38:57.584 ] 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "subsystem": "iobuf", 00:38:57.584 "config": [ 00:38:57.584 { 00:38:57.584 "method": "iobuf_set_options", 00:38:57.584 "params": { 00:38:57.584 "small_pool_count": 8192, 00:38:57.584 "large_pool_count": 1024, 00:38:57.584 "small_bufsize": 8192, 00:38:57.584 "large_bufsize": 135168, 00:38:57.584 "enable_numa": false 00:38:57.584 } 00:38:57.584 } 00:38:57.584 ] 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "subsystem": "sock", 00:38:57.584 "config": [ 00:38:57.584 { 00:38:57.584 "method": "sock_set_default_impl", 00:38:57.584 "params": { 00:38:57.584 "impl_name": "posix" 00:38:57.584 } 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "method": "sock_impl_set_options", 00:38:57.584 "params": { 00:38:57.584 "impl_name": "ssl", 00:38:57.584 "recv_buf_size": 4096, 00:38:57.584 "send_buf_size": 4096, 00:38:57.584 "enable_recv_pipe": true, 00:38:57.584 "enable_quickack": false, 00:38:57.584 "enable_placement_id": 0, 00:38:57.584 "enable_zerocopy_send_server": true, 00:38:57.584 "enable_zerocopy_send_client": false, 00:38:57.584 "zerocopy_threshold": 0, 00:38:57.584 "tls_version": 0, 00:38:57.584 "enable_ktls": false 00:38:57.584 } 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "method": "sock_impl_set_options", 00:38:57.584 "params": { 00:38:57.584 "impl_name": "posix", 00:38:57.584 "recv_buf_size": 2097152, 00:38:57.584 "send_buf_size": 2097152, 00:38:57.584 "enable_recv_pipe": true, 00:38:57.584 "enable_quickack": false, 00:38:57.584 "enable_placement_id": 0, 00:38:57.584 "enable_zerocopy_send_server": true, 00:38:57.584 
"enable_zerocopy_send_client": false, 00:38:57.584 "zerocopy_threshold": 0, 00:38:57.584 "tls_version": 0, 00:38:57.584 "enable_ktls": false 00:38:57.584 } 00:38:57.584 } 00:38:57.584 ] 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "subsystem": "vmd", 00:38:57.584 "config": [] 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "subsystem": "accel", 00:38:57.584 "config": [ 00:38:57.584 { 00:38:57.584 "method": "accel_set_options", 00:38:57.584 "params": { 00:38:57.584 "small_cache_size": 128, 00:38:57.584 "large_cache_size": 16, 00:38:57.584 "task_count": 2048, 00:38:57.584 "sequence_count": 2048, 00:38:57.584 "buf_count": 2048 00:38:57.584 } 00:38:57.584 } 00:38:57.584 ] 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "subsystem": "bdev", 00:38:57.584 "config": [ 00:38:57.584 { 00:38:57.584 "method": "bdev_set_options", 00:38:57.584 "params": { 00:38:57.584 "bdev_io_pool_size": 65535, 00:38:57.584 "bdev_io_cache_size": 256, 00:38:57.584 "bdev_auto_examine": true, 00:38:57.584 "iobuf_small_cache_size": 128, 00:38:57.584 "iobuf_large_cache_size": 16 00:38:57.584 } 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "method": "bdev_raid_set_options", 00:38:57.584 "params": { 00:38:57.584 "process_window_size_kb": 1024, 00:38:57.584 "process_max_bandwidth_mb_sec": 0 00:38:57.584 } 00:38:57.584 }, 00:38:57.584 { 00:38:57.584 "method": "bdev_iscsi_set_options", 00:38:57.584 "params": { 00:38:57.584 "timeout_sec": 30 00:38:57.584 } 00:38:57.585 }, 00:38:57.585 { 00:38:57.585 "method": "bdev_nvme_set_options", 00:38:57.585 "params": { 00:38:57.585 "action_on_timeout": "none", 00:38:57.585 "timeout_us": 0, 00:38:57.585 "timeout_admin_us": 0, 00:38:57.585 "keep_alive_timeout_ms": 10000, 00:38:57.585 "arbitration_burst": 0, 00:38:57.585 "low_priority_weight": 0, 00:38:57.585 "medium_priority_weight": 0, 00:38:57.585 "high_priority_weight": 0, 00:38:57.585 "nvme_adminq_poll_period_us": 10000, 00:38:57.585 "nvme_ioq_poll_period_us": 0, 00:38:57.585 "io_queue_requests": 512, 00:38:57.585 
"delay_cmd_submit": true, 00:38:57.585 "transport_retry_count": 4, 00:38:57.585 "bdev_retry_count": 3, 00:38:57.585 "transport_ack_timeout": 0, 00:38:57.585 "ctrlr_loss_timeout_sec": 0, 00:38:57.585 "reconnect_delay_sec": 0, 00:38:57.585 "fast_io_fail_timeout_sec": 0, 00:38:57.585 "disable_auto_failback": false, 00:38:57.585 "generate_uuids": false, 00:38:57.585 "transport_tos": 0, 00:38:57.585 "nvme_error_stat": false, 00:38:57.585 "rdma_srq_size": 0, 00:38:57.585 "io_path_stat": false, 00:38:57.585 "allow_accel_sequence": false, 00:38:57.585 "rdma_max_cq_size": 0, 00:38:57.585 "rdma_cm_event_timeout_ms": 0, 00:38:57.585 "dhchap_digests": [ 00:38:57.585 "sha256", 00:38:57.585 "sha384", 00:38:57.585 "sha512" 00:38:57.585 ], 00:38:57.585 "dhchap_dhgroups": [ 00:38:57.585 "null", 00:38:57.585 "ffdhe2048", 00:38:57.585 "ffdhe3072", 00:38:57.585 "ffdhe4096", 00:38:57.585 "ffdhe6144", 00:38:57.585 "ffdhe8192" 00:38:57.585 ] 00:38:57.585 } 00:38:57.585 }, 00:38:57.585 { 00:38:57.585 "method": "bdev_nvme_attach_controller", 00:38:57.585 "params": { 00:38:57.585 "name": "nvme0", 00:38:57.585 "trtype": "TCP", 00:38:57.585 "adrfam": "IPv4", 00:38:57.585 "traddr": "127.0.0.1", 00:38:57.585 "trsvcid": "4420", 00:38:57.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:57.585 "prchk_reftag": false, 00:38:57.585 "prchk_guard": false, 00:38:57.585 "ctrlr_loss_timeout_sec": 0, 00:38:57.585 "reconnect_delay_sec": 0, 00:38:57.585 "fast_io_fail_timeout_sec": 0, 00:38:57.585 "psk": "key0", 00:38:57.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:57.585 "hdgst": false, 00:38:57.585 "ddgst": false, 00:38:57.585 "multipath": "multipath" 00:38:57.585 } 00:38:57.585 }, 00:38:57.585 { 00:38:57.585 "method": "bdev_nvme_set_hotplug", 00:38:57.585 "params": { 00:38:57.585 "period_us": 100000, 00:38:57.585 "enable": false 00:38:57.585 } 00:38:57.585 }, 00:38:57.585 { 00:38:57.585 "method": "bdev_wait_for_examine" 00:38:57.585 } 00:38:57.585 ] 00:38:57.585 }, 00:38:57.585 { 00:38:57.585 
"subsystem": "nbd", 00:38:57.585 "config": [] 00:38:57.585 } 00:38:57.585 ] 00:38:57.585 }' 00:38:57.585 10:49:35 keyring_file -- keyring/file.sh@115 -- # killprocess 2942065 00:38:57.585 10:49:35 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2942065 ']' 00:38:57.585 10:49:35 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2942065 00:38:57.585 10:49:35 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:57.585 10:49:35 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942065 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942065' 00:38:57.586 killing process with pid 2942065 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@973 -- # kill 2942065 00:38:57.586 Received shutdown signal, test time was about 1.000000 seconds 00:38:57.586 00:38:57.586 Latency(us) 00:38:57.586 [2024-12-09T09:49:35.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.586 [2024-12-09T09:49:35.310Z] =================================================================================================================== 00:38:57.586 [2024-12-09T09:49:35.310Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@978 -- # wait 2942065 00:38:57.586 10:49:35 keyring_file -- keyring/file.sh@118 -- # bperfpid=2943578 00:38:57.586 10:49:35 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2943578 /var/tmp/bperf.sock 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2943578 ']' 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:57.586 10:49:35 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.586 10:49:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:57.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:57.586 10:49:35 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:57.586 "subsystems": [ 00:38:57.586 { 00:38:57.586 "subsystem": "keyring", 00:38:57.586 "config": [ 00:38:57.586 { 00:38:57.586 "method": "keyring_file_add_key", 00:38:57.586 "params": { 00:38:57.586 "name": "key0", 00:38:57.586 "path": "/tmp/tmp.o3W9rRTGrC" 00:38:57.586 } 00:38:57.586 }, 00:38:57.586 { 00:38:57.586 "method": "keyring_file_add_key", 00:38:57.586 "params": { 00:38:57.586 "name": "key1", 00:38:57.586 "path": "/tmp/tmp.jMwBudL09u" 00:38:57.586 } 00:38:57.586 } 00:38:57.586 ] 00:38:57.586 }, 00:38:57.586 { 00:38:57.586 "subsystem": "iobuf", 00:38:57.586 "config": [ 00:38:57.586 { 00:38:57.586 "method": "iobuf_set_options", 00:38:57.586 "params": { 00:38:57.586 "small_pool_count": 8192, 00:38:57.586 "large_pool_count": 1024, 00:38:57.586 "small_bufsize": 8192, 00:38:57.586 "large_bufsize": 135168, 00:38:57.586 "enable_numa": false 00:38:57.586 } 00:38:57.586 } 00:38:57.586 ] 00:38:57.586 }, 00:38:57.586 { 00:38:57.586 "subsystem": "sock", 00:38:57.586 "config": [ 00:38:57.586 { 00:38:57.586 "method": "sock_set_default_impl", 00:38:57.586 "params": { 00:38:57.586 "impl_name": "posix" 00:38:57.586 } 00:38:57.586 }, 00:38:57.586 { 00:38:57.586 "method": "sock_impl_set_options", 00:38:57.586 "params": { 00:38:57.586 "impl_name": "ssl", 00:38:57.586 "recv_buf_size": 4096, 00:38:57.586 
"send_buf_size": 4096, 00:38:57.586 "enable_recv_pipe": true, 00:38:57.586 "enable_quickack": false, 00:38:57.587 "enable_placement_id": 0, 00:38:57.587 "enable_zerocopy_send_server": true, 00:38:57.587 "enable_zerocopy_send_client": false, 00:38:57.587 "zerocopy_threshold": 0, 00:38:57.587 "tls_version": 0, 00:38:57.587 "enable_ktls": false 00:38:57.587 } 00:38:57.587 }, 00:38:57.587 { 00:38:57.587 "method": "sock_impl_set_options", 00:38:57.587 "params": { 00:38:57.587 "impl_name": "posix", 00:38:57.587 "recv_buf_size": 2097152, 00:38:57.587 "send_buf_size": 2097152, 00:38:57.587 "enable_recv_pipe": true, 00:38:57.587 "enable_quickack": false, 00:38:57.587 "enable_placement_id": 0, 00:38:57.587 "enable_zerocopy_send_server": true, 00:38:57.587 "enable_zerocopy_send_client": false, 00:38:57.587 "zerocopy_threshold": 0, 00:38:57.587 "tls_version": 0, 00:38:57.587 "enable_ktls": false 00:38:57.587 } 00:38:57.587 } 00:38:57.587 ] 00:38:57.587 }, 00:38:57.587 { 00:38:57.587 "subsystem": "vmd", 00:38:57.587 "config": [] 00:38:57.587 }, 00:38:57.587 { 00:38:57.587 "subsystem": "accel", 00:38:57.587 "config": [ 00:38:57.587 { 00:38:57.587 "method": "accel_set_options", 00:38:57.587 "params": { 00:38:57.587 "small_cache_size": 128, 00:38:57.587 "large_cache_size": 16, 00:38:57.587 "task_count": 2048, 00:38:57.587 "sequence_count": 2048, 00:38:57.587 "buf_count": 2048 00:38:57.587 } 00:38:57.587 } 00:38:57.587 ] 00:38:57.587 }, 00:38:57.587 { 00:38:57.587 "subsystem": "bdev", 00:38:57.587 "config": [ 00:38:57.587 { 00:38:57.587 "method": "bdev_set_options", 00:38:57.587 "params": { 00:38:57.587 "bdev_io_pool_size": 65535, 00:38:57.587 "bdev_io_cache_size": 256, 00:38:57.587 "bdev_auto_examine": true, 00:38:57.587 "iobuf_small_cache_size": 128, 00:38:57.587 "iobuf_large_cache_size": 16 00:38:57.587 } 00:38:57.587 }, 00:38:57.587 { 00:38:57.587 "method": "bdev_raid_set_options", 00:38:57.587 "params": { 00:38:57.587 "process_window_size_kb": 1024, 00:38:57.587 
"process_max_bandwidth_mb_sec": 0 00:38:57.587 } 00:38:57.587 }, 00:38:57.587 { 00:38:57.587 "method": "bdev_iscsi_set_options", 00:38:57.587 "params": { 00:38:57.587 "timeout_sec": 30 00:38:57.587 } 00:38:57.587 }, 00:38:57.587 { 00:38:57.587 "method": "bdev_nvme_set_options", 00:38:57.587 "params": { 00:38:57.587 "action_on_timeout": "none", 00:38:57.587 "timeout_us": 0, 00:38:57.587 "timeout_admin_us": 0, 00:38:57.587 "keep_alive_timeout_ms": 10000, 00:38:57.587 "arbitration_burst": 0, 00:38:57.587 "low_priority_weight": 0, 00:38:57.587 "medium_priority_weight": 0, 00:38:57.587 "high_priority_weight": 0, 00:38:57.587 "nvme_adminq_poll_period_us": 10000, 00:38:57.587 "nvme_ioq_poll_period_us": 0, 00:38:57.587 "io_queue_requests": 512, 00:38:57.588 "delay_cmd_submit": true, 00:38:57.588 "transport_retry_count": 4, 00:38:57.588 "bdev_retry_count": 3, 00:38:57.588 "transport_ack_timeout": 0, 00:38:57.588 "ctrlr_loss_timeout_sec": 0, 00:38:57.588 "reconnect_delay_sec": 0, 00:38:57.588 "fast_io_fail_timeout_sec": 0, 00:38:57.588 "disable_auto_failback": false, 00:38:57.588 "generate_uuids": false, 00:38:57.588 "transport_tos": 0, 00:38:57.588 "nvme_error_stat": false, 00:38:57.588 "rdma_srq_size": 0, 00:38:57.588 "io_path_stat": false, 00:38:57.588 "allow_accel_sequence": false, 00:38:57.588 "rdma_max_cq_size": 0, 00:38:57.588 "rdma_cm_event_timeout_ms": 0, 00:38:57.588 "dhchap_digests": [ 00:38:57.588 "sha256", 00:38:57.588 "sha384", 00:38:57.588 "sha512" 00:38:57.588 ], 00:38:57.588 "dhchap_dhgroups": [ 00:38:57.588 "null", 00:38:57.588 "ffdhe2048", 00:38:57.588 "ffdhe3072", 00:38:57.588 "ffdhe4096", 00:38:57.588 "ffdhe6144", 00:38:57.588 "ffdhe8192" 00:38:57.588 ] 00:38:57.588 } 00:38:57.588 }, 00:38:57.588 { 00:38:57.588 "method": "bdev_nvme_attach_controller", 00:38:57.588 "params": { 00:38:57.588 "name": "nvme0", 00:38:57.588 "trtype": "TCP", 00:38:57.588 "adrfam": "IPv4", 00:38:57.588 "traddr": "127.0.0.1", 00:38:57.588 "trsvcid": "4420", 00:38:57.588 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:38:57.588 "prchk_reftag": false, 00:38:57.588 "prchk_guard": false, 00:38:57.588 "ctrlr_loss_timeout_sec": 0, 00:38:57.588 "reconnect_delay_sec": 0, 00:38:57.588 "fast_io_fail_timeout_sec": 0, 00:38:57.588 "psk": "key0", 00:38:57.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:57.588 "hdgst": false, 00:38:57.588 "ddgst": false, 00:38:57.588 "multipath": "multipath" 00:38:57.588 } 00:38:57.588 }, 00:38:57.588 { 00:38:57.588 "method": "bdev_nvme_set_hotplug", 00:38:57.588 "params": { 00:38:57.588 "period_us": 100000, 00:38:57.588 "enable": false 00:38:57.588 } 00:38:57.588 }, 00:38:57.588 { 00:38:57.588 "method": "bdev_wait_for_examine" 00:38:57.588 } 00:38:57.588 ] 00:38:57.588 }, 00:38:57.588 { 00:38:57.588 "subsystem": "nbd", 00:38:57.588 "config": [] 00:38:57.588 } 00:38:57.588 ] 00:38:57.588 }' 00:38:57.588 10:49:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.588 10:49:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:57.847 [2024-12-09 10:49:35.342529] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:38:57.847 [2024-12-09 10:49:35.342573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943578 ] 00:38:57.847 [2024-12-09 10:49:35.418063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.847 [2024-12-09 10:49:35.460441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:58.105 [2024-12-09 10:49:35.621854] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:58.669 10:49:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:58.669 10:49:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:58.669 10:49:36 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:58.669 10:49:36 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:58.669 10:49:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.669 10:49:36 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:58.669 10:49:36 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:58.669 10:49:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:58.669 10:49:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:58.669 10:49:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:58.669 10:49:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:58.669 10:49:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.926 10:49:36 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:58.926 10:49:36 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:58.926 10:49:36 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:58.926 10:49:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:58.926 10:49:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:58.926 10:49:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:58.926 10:49:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.183 10:49:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:59.183 10:49:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:59.183 10:49:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:59.183 10:49:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:59.441 10:49:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:59.441 10:49:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:59.441 10:49:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.o3W9rRTGrC /tmp/tmp.jMwBudL09u 00:38:59.441 10:49:36 keyring_file -- keyring/file.sh@20 -- # killprocess 2943578 00:38:59.441 10:49:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2943578 ']' 00:38:59.441 10:49:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2943578 00:38:59.441 10:49:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:59.441 10:49:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:59.441 10:49:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2943578 00:38:59.441 10:49:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:59.441 10:49:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:59.441 10:49:37 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2943578' 00:38:59.441 killing process with pid 2943578 00:38:59.441 10:49:37 keyring_file -- common/autotest_common.sh@973 -- # kill 2943578 00:38:59.441 Received shutdown signal, test time was about 1.000000 seconds 00:38:59.441 00:38:59.441 Latency(us) 00:38:59.441 [2024-12-09T09:49:37.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.441 [2024-12-09T09:49:37.165Z] =================================================================================================================== 00:38:59.441 [2024-12-09T09:49:37.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:59.441 10:49:37 keyring_file -- common/autotest_common.sh@978 -- # wait 2943578 00:38:59.699 10:49:37 keyring_file -- keyring/file.sh@21 -- # killprocess 2942055 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2942055 ']' 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2942055 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942055 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942055' 00:38:59.699 killing process with pid 2942055 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@973 -- # kill 2942055 00:38:59.699 10:49:37 keyring_file -- common/autotest_common.sh@978 -- # wait 2942055 00:38:59.959 00:38:59.959 real 0m11.744s 00:38:59.959 user 0m29.181s 00:38:59.959 sys 0m2.696s 00:38:59.959 10:49:37 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:38:59.959 10:49:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:59.959 ************************************ 00:38:59.959 END TEST keyring_file 00:38:59.959 ************************************ 00:38:59.959 10:49:37 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:59.959 10:49:37 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:59.959 10:49:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:59.959 10:49:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.959 10:49:37 -- common/autotest_common.sh@10 -- # set +x 00:38:59.959 ************************************ 00:38:59.959 START TEST keyring_linux 00:38:59.959 ************************************ 00:38:59.959 10:49:37 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:59.959 Joined session keyring: 33351422 00:39:00.219 * Looking for test storage... 
00:39:00.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:00.219 10:49:37 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:00.219 10:49:37 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:39:00.219 10:49:37 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:00.219 10:49:37 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:00.219 10:49:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:00.220 10:49:37 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.220 10:49:37 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.220 --rc genhtml_branch_coverage=1 00:39:00.220 --rc genhtml_function_coverage=1 00:39:00.220 --rc genhtml_legend=1 00:39:00.220 --rc geninfo_all_blocks=1 00:39:00.220 --rc geninfo_unexecuted_blocks=1 00:39:00.220 00:39:00.220 ' 00:39:00.220 10:49:37 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.220 --rc genhtml_branch_coverage=1 00:39:00.220 --rc genhtml_function_coverage=1 00:39:00.220 --rc genhtml_legend=1 00:39:00.220 --rc geninfo_all_blocks=1 00:39:00.220 --rc geninfo_unexecuted_blocks=1 00:39:00.220 00:39:00.220 ' 
00:39:00.220 10:49:37 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.220 --rc genhtml_branch_coverage=1 00:39:00.220 --rc genhtml_function_coverage=1 00:39:00.220 --rc genhtml_legend=1 00:39:00.220 --rc geninfo_all_blocks=1 00:39:00.220 --rc geninfo_unexecuted_blocks=1 00:39:00.220 00:39:00.220 ' 00:39:00.220 10:49:37 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:00.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.220 --rc genhtml_branch_coverage=1 00:39:00.220 --rc genhtml_function_coverage=1 00:39:00.220 --rc genhtml_legend=1 00:39:00.220 --rc geninfo_all_blocks=1 00:39:00.220 --rc geninfo_unexecuted_blocks=1 00:39:00.220 00:39:00.220 ' 00:39:00.220 10:49:37 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:00.220 10:49:37 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.220 10:49:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.220 10:49:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.220 10:49:37 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.220 10:49:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.220 10:49:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:00.220 10:49:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:00.220 10:49:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:39:00.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:00.221 /tmp/:spdk-test:key0 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:00.221 10:49:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:00.221 10:49:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:00.221 /tmp/:spdk-test:key1 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2944136 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2944136 00:39:00.221 10:49:37 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:00.221 10:49:37 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2944136 ']' 00:39:00.221 10:49:37 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.221 10:49:37 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.221 10:49:37 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.221 10:49:37 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.221 10:49:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:00.480 [2024-12-09 10:49:37.945373] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:39:00.480 [2024-12-09 10:49:37.945421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944136 ] 00:39:00.480 [2024-12-09 10:49:38.017368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.480 [2024-12-09 10:49:38.058989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.737 10:49:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:00.738 10:49:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 [2024-12-09 10:49:38.281366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:00.738 null0 00:39:00.738 [2024-12-09 10:49:38.313415] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:00.738 [2024-12-09 10:49:38.313762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.738 10:49:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:00.738 88047001 00:39:00.738 10:49:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:00.738 904090370 00:39:00.738 10:49:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2944141 00:39:00.738 10:49:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2944141 /var/tmp/bperf.sock 00:39:00.738 10:49:38 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2944141 ']' 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:00.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.738 10:49:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:00.738 [2024-12-09 10:49:38.386101] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:39:00.738 [2024-12-09 10:49:38.386143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944141 ] 00:39:00.995 [2024-12-09 10:49:38.461821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.995 [2024-12-09 10:49:38.503783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.995 10:49:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:00.995 10:49:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:00.995 10:49:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:00.995 10:49:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:01.253 10:49:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:01.253 10:49:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:01.511 10:49:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:01.511 10:49:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:01.511 [2024-12-09 10:49:39.129101] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:01.511 nvme0n1 00:39:01.511 10:49:39 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:39:01.511 10:49:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:01.511 10:49:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:01.511 10:49:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:01.511 10:49:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.511 10:49:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:01.768 10:49:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:01.768 10:49:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:01.768 10:49:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:01.768 10:49:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:01.768 10:49:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.768 10:49:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.768 10:49:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:02.026 10:49:39 keyring_linux -- keyring/linux.sh@25 -- # sn=88047001 00:39:02.026 10:49:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:02.026 10:49:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:02.026 10:49:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 88047001 == \8\8\0\4\7\0\0\1 ]] 00:39:02.026 10:49:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 88047001 00:39:02.026 10:49:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:02.026 10:49:39 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:02.026 Running I/O for 1 seconds... 00:39:03.398 21415.00 IOPS, 83.65 MiB/s 00:39:03.398 Latency(us) 00:39:03.398 [2024-12-09T09:49:41.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.398 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:03.398 nvme0n1 : 1.01 21417.86 83.66 0.00 0.00 5957.03 4993.22 14293.09 00:39:03.399 [2024-12-09T09:49:41.123Z] =================================================================================================================== 00:39:03.399 [2024-12-09T09:49:41.123Z] Total : 21417.86 83.66 0.00 0.00 5957.03 4993.22 14293.09 00:39:03.399 { 00:39:03.399 "results": [ 00:39:03.399 { 00:39:03.399 "job": "nvme0n1", 00:39:03.399 "core_mask": "0x2", 00:39:03.399 "workload": "randread", 00:39:03.399 "status": "finished", 00:39:03.399 "queue_depth": 128, 00:39:03.399 "io_size": 4096, 00:39:03.399 "runtime": 1.005843, 00:39:03.399 "iops": 21417.85547048595, 00:39:03.399 "mibps": 83.66349793158574, 00:39:03.399 "io_failed": 0, 00:39:03.399 "io_timeout": 0, 00:39:03.399 "avg_latency_us": 5957.025720872763, 00:39:03.399 "min_latency_us": 4993.219047619048, 00:39:03.399 "max_latency_us": 14293.089523809524 00:39:03.399 } 00:39:03.399 ], 00:39:03.399 "core_count": 1 00:39:03.399 } 00:39:03.399 10:49:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:03.399 10:49:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:03.399 10:49:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:03.399 10:49:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:03.399 10:49:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:03.399 10:49:40 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:03.399 10:49:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.399 10:49:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:03.656 10:49:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:03.656 [2024-12-09 10:49:41.312241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:03.656 [2024-12-09 10:49:41.313065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188bbc0 (107): Transport endpoint is not connected 00:39:03.656 [2024-12-09 10:49:41.314060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188bbc0 (9): Bad file descriptor 00:39:03.656 [2024-12-09 10:49:41.315061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:03.656 [2024-12-09 10:49:41.315076] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:03.656 [2024-12-09 10:49:41.315084] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:03.656 [2024-12-09 10:49:41.315093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:03.656 request: 00:39:03.656 { 00:39:03.656 "name": "nvme0", 00:39:03.656 "trtype": "tcp", 00:39:03.656 "traddr": "127.0.0.1", 00:39:03.656 "adrfam": "ipv4", 00:39:03.656 "trsvcid": "4420", 00:39:03.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:03.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:03.656 "prchk_reftag": false, 00:39:03.656 "prchk_guard": false, 00:39:03.656 "hdgst": false, 00:39:03.656 "ddgst": false, 00:39:03.656 "psk": ":spdk-test:key1", 00:39:03.656 "allow_unrecognized_csi": false, 00:39:03.656 "method": "bdev_nvme_attach_controller", 00:39:03.656 "req_id": 1 00:39:03.656 } 00:39:03.656 Got JSON-RPC error response 00:39:03.656 response: 00:39:03.656 { 00:39:03.656 "code": -5, 00:39:03.656 "message": "Input/output error" 00:39:03.656 } 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:03.656 10:49:41 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@33 -- # sn=88047001 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 88047001 00:39:03.656 1 links removed 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:03.656 10:49:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:03.656 
10:49:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:03.657 10:49:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:03.657 10:49:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:03.657 10:49:41 keyring_linux -- keyring/linux.sh@33 -- # sn=904090370 00:39:03.657 10:49:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 904090370 00:39:03.657 1 links removed 00:39:03.657 10:49:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2944141 00:39:03.657 10:49:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2944141 ']' 00:39:03.657 10:49:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2944141 00:39:03.657 10:49:41 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:03.657 10:49:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.657 10:49:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2944141 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2944141' 00:39:03.915 killing process with pid 2944141 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 2944141 00:39:03.915 Received shutdown signal, test time was about 1.000000 seconds 00:39:03.915 00:39:03.915 Latency(us) 00:39:03.915 [2024-12-09T09:49:41.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.915 [2024-12-09T09:49:41.639Z] =================================================================================================================== 00:39:03.915 [2024-12-09T09:49:41.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 2944141 
00:39:03.915 10:49:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2944136 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2944136 ']' 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2944136 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2944136 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2944136' 00:39:03.915 killing process with pid 2944136 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 2944136 00:39:03.915 10:49:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 2944136 00:39:04.484 00:39:04.484 real 0m4.332s 00:39:04.484 user 0m8.114s 00:39:04.484 sys 0m1.455s 00:39:04.484 10:49:41 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:04.484 10:49:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:04.484 ************************************ 00:39:04.484 END TEST keyring_linux 00:39:04.484 ************************************ 00:39:04.484 10:49:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:04.484 10:49:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:04.484 10:49:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:04.484 10:49:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:04.484 10:49:41 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:04.484 10:49:41 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:04.484 10:49:41 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:04.484 10:49:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:04.484 10:49:41 -- common/autotest_common.sh@10 -- # set +x 00:39:04.484 10:49:41 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:04.484 10:49:41 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:04.484 10:49:41 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:04.484 10:49:41 -- common/autotest_common.sh@10 -- # set +x 00:39:09.765 INFO: APP EXITING 00:39:09.765 INFO: killing all VMs 00:39:09.765 INFO: killing vhost app 00:39:09.765 INFO: EXIT DONE 00:39:12.306 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:39:12.306 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:39:12.306 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:39:12.306 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:39:15.602 Cleaning 00:39:15.602 Removing: /var/run/dpdk/spdk0/config 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:15.602 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:15.602 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:15.602 Removing: /var/run/dpdk/spdk1/config 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:15.602 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:15.602 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:15.602 Removing: /var/run/dpdk/spdk2/config 00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:15.602 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:39:15.602 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:39:15.602 Removing: /var/run/dpdk/spdk2/hugepage_info
00:39:15.602 Removing: /var/run/dpdk/spdk3/config
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:39:15.602 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:39:15.602 Removing: /var/run/dpdk/spdk3/hugepage_info
00:39:15.602 Removing: /var/run/dpdk/spdk4/config
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:39:15.602 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:39:15.602 Removing: /var/run/dpdk/spdk4/hugepage_info
00:39:15.602 Removing: /dev/shm/bdev_svc_trace.1
00:39:15.602 Removing: /dev/shm/nvmf_trace.0
00:39:15.602 Removing: /dev/shm/spdk_tgt_trace.pid2463960
00:39:15.602 Removing: /var/run/dpdk/spdk0
00:39:15.602 Removing: /var/run/dpdk/spdk1
00:39:15.602 Removing: /var/run/dpdk/spdk2
00:39:15.602 Removing: /var/run/dpdk/spdk3
00:39:15.602 Removing: /var/run/dpdk/spdk4
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2461589
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2462658
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2463960
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2464607
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2465555
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2465581
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2466570
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2466773
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2467036
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2468777
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2470657
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2470949
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2471238
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2471555
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2471845
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2472098
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2472344
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2472635
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2473378
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2476371
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2476626
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2476781
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2476894
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2477203
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2477388
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2477740
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2477888
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2478152
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2478162
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2478417
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2478577
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2479002
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2479257
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2479558
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2483479
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2487747
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2497770
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2498399
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2502639
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2502992
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2507297
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2513304
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2516294
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2526725
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2535650
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2537276
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2538198
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2555302
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2559307
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2605387
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2610711
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2617055
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2623558
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2623564
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2624475
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2625365
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2626122
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2626769
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2626776
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2627012
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2627152
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2627235
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2628054
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2628856
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2629770
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2630287
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2630449
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2630685
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2631712
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2632690
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2640865
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2669921
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2674367
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2676027
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2677762
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2677886
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2678124
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2678147
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2678644
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2680480
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2681308
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2681739
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2684001
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2684464
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2685174
00:39:15.602 Removing: /var/run/dpdk/spdk_pid2689785
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2695239
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2695240
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2695241
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2699244
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2707596
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2711625
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2717623
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2718936
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2720299
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2721798
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2726305
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2730861
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2735015
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2742958
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2742995
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2747491
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2747720
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2747947
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2748404
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2748411
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2752902
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2753474
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2757814
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2760562
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2765949
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2771243
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2779852
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2787579
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2787582
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2806380
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2806856
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2807558
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2808032
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2808769
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2809250
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2809774
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2810412
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2814662
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2814905
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2820921
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2821021
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2826506
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2830869
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2840900
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2841453
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2845701
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2845954
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2850140
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2855834
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2858415
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2868553
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2877372
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2879551
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2880468
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2896452
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2900367
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2903105
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2911142
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2911150
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2916275
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2918148
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2920113
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2921177
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2923756
00:39:15.863 Removing: /var/run/dpdk/spdk_pid2924918
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2933670
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2934140
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2934808
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2937078
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2937549
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2938072
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2942055
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2942065
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2943578
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2944136
00:39:16.123 Removing: /var/run/dpdk/spdk_pid2944141
00:39:16.123 Clean
00:39:16.123 10:49:53 -- common/autotest_common.sh@1453 -- # return 0
00:39:16.123 10:49:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:39:16.123 10:49:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:16.123 10:49:53 -- common/autotest_common.sh@10 -- # set +x
00:39:16.123 10:49:53 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:39:16.123 10:49:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:16.123 10:49:53 -- common/autotest_common.sh@10 -- # set +x
00:39:16.123 10:49:53 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:16.123 10:49:53 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:16.123 10:49:53 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:16.123 10:49:53 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:39:16.123 10:49:53 -- spdk/autotest.sh@398 -- # hostname
10:49:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:16.382 geninfo: WARNING: invalid characters removed from testname!
00:39:38.341 10:50:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:39.320 10:50:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:41.332 10:50:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:43.278 10:50:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:45.185 10:50:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:47.120 10:50:24 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:48.499 10:50:26 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:48.499 10:50:26 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:48.499 10:50:26 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:48.499 10:50:26 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:48.499 10:50:26 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:48.499 10:50:26 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:48.758 + [[ -n 2384857 ]]
00:39:48.758 + sudo kill 2384857
00:39:48.771 [Pipeline] }
00:39:48.785 [Pipeline] // stage
00:39:48.789 [Pipeline] }
00:39:48.801 [Pipeline] // timeout
00:39:48.806 [Pipeline] }
00:39:48.821 [Pipeline] // catchError
00:39:48.826 [Pipeline] }
00:39:48.841 [Pipeline] // wrap
00:39:48.846 [Pipeline] }
00:39:48.859 [Pipeline] // catchError
00:39:48.876 [Pipeline] stage
00:39:48.879 [Pipeline] { (Epilogue)
00:39:48.892 [Pipeline] catchError
00:39:48.894 [Pipeline] {
00:39:48.907 [Pipeline] echo
00:39:48.909 Cleanup processes
00:39:48.915 [Pipeline] sh
00:39:49.480 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:49.480 2954853 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:49.493 [Pipeline] sh
00:39:49.784 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:49.784 ++ grep -v 'sudo pgrep'
00:39:49.784 ++ awk '{print $1}'
00:39:49.784 + sudo kill -9
00:39:49.784 + true
00:39:49.796 [Pipeline] sh
00:39:50.087 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:02.315 [Pipeline] sh
00:40:02.605 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:02.605 Artifacts sizes are good
00:40:02.620 [Pipeline] archiveArtifacts
00:40:02.628 Archiving artifacts
00:40:03.045 [Pipeline] sh
00:40:03.333 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:03.348 [Pipeline] cleanWs
00:40:03.358 [WS-CLEANUP] Deleting project workspace...
00:40:03.358 [WS-CLEANUP] Deferred wipeout is used...
00:40:03.371 [WS-CLEANUP] done
00:40:03.373 [Pipeline] }
00:40:03.390 [Pipeline] // catchError
00:40:03.403 [Pipeline] sh
00:40:03.689 + logger -p user.info -t JENKINS-CI
00:40:03.698 [Pipeline] }
00:40:03.712 [Pipeline] // stage
00:40:03.717 [Pipeline] }
00:40:03.731 [Pipeline] // node
00:40:03.736 [Pipeline] End of Pipeline
00:40:03.770 Finished: SUCCESS